This guide is the fastest safe path to a first working agent in the core runtime.
The goal is not to build your final production agent on the first try. The goal is to prove that the core path works:
By the end of this guide, you should have:

- A working `LLMConfiguration__c` record.
- An active `AIAgentDefinition__c` record.
- `AgentExecution__c` and `ExecutionStep__c` records proving the runtime is working.

Before you start, make sure you have:

- **Salesforce org**: a sandbox or scratch org with System Administrator access for the initial setup.
- **Provider access**: an API key or equivalent access for OpenAI or another OpenAI-compatible provider.
- **CLI tooling**: Salesforce CLI for source deployments. CumulusCI is optional but useful for scratch org flows.
The biggest mistake teams make is trying to validate too many things at once.
For your first run, avoid combining several untested variables, such as a new provider setup, custom capabilities, and broad prompts, in the same pass.
You want the first result to answer one question clearly: can this org run the core framework end to end?
```bash
git clone https://github.com/iamsonal/aiAgentStudio.git
cd aiAgentStudio
```

```bash
sf org login web -a your-org-alias
sf project deploy start -d force-app/main/default -o your-org-alias
```

Optional seed data for local experimentation:

```bash
sf project deploy start -d seed-data/main/default -o your-org-alias
```

If you use CumulusCI for scratch org setup, the repo includes a dev flow that can deploy the framework and convenience metadata in one pass:

```bash
git clone https://github.com/iamsonal/aiAgentStudio.git
cd aiAgentStudio
cci flow run dev_org --org dev
```

If you are debugging deployment issues, solve those before touching prompts or capabilities. A clean metadata deployment is the foundation for everything else.
The core package includes an OpenAI-oriented setup:

- `OpenAI_API` named credential
- `OpenAIEC` external credential
- `OpenAIProviderAdapter` adapter class

For OpenAI:

- Set your API key in the `OpenAIEC.OpenAIKey` authentication parameter.

For another OpenAI-compatible provider:

- Point the named credential at your provider's endpoint and keep `LLMConfiguration__c.ProviderAdapterClass__c` set to `OpenAIProviderAdapter`.

If model calls fail later, credentials are one of the first places to check. Many “agent” failures are really auth or endpoint configuration failures.
Create an LLMConfiguration__c record for the provider and model you want to use.
| Field | Example value | Notes |
|---|---|---|
| `DeveloperName__c` | `OpenAI_GPT4oMini` | Stable API-facing name |
| `NamedCredential__c` | `OpenAI_API` | The credential created in the previous step |
| `ProviderAdapterClass__c` | `OpenAIProviderAdapter` | Core adapter for OpenAI-compatible APIs |
| `DefaultModelIdentifier__c` | `gpt-4o-mini` | Model name to send on provider calls |
| `DefaultTemperature__c` | `0.3` | Lower values are better for predictable tool usage |
| `IsActive__c` | `true` | Enable for agent use |
For first rollout, prefer a lower temperature. It usually makes tool selection and prompt behavior more stable while you are still validating the runtime.
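If you prefer to script this step, the record can also be created from the command line with `sf data create record`. The sketch below only assembles the command from the table above; treat the exact CLI flag names as an assumption to verify against your installed Salesforce CLI version.

```python
# Sketch: build an `sf data create record` invocation for the LLM
# configuration. Field values come from the table above; verify the flag
# names against your Salesforce CLI version before running the command.
import shlex

def build_create_cmd(sobject: str, values: dict, org_alias: str) -> list:
    """Assemble the argv list for `sf data create record`."""
    value_str = " ".join(f"{field}='{value}'" for field, value in values.items())
    return [
        "sf", "data", "create", "record",
        "--sobject", sobject,
        "--values", value_str,
        "--target-org", org_alias,
    ]

cmd = build_create_cmd(
    "LLMConfiguration__c",
    {
        "DeveloperName__c": "OpenAI_GPT4oMini",
        "NamedCredential__c": "OpenAI_API",
        "ProviderAdapterClass__c": "OpenAIProviderAdapter",
        "DefaultModelIdentifier__c": "gpt-4o-mini",
        "DefaultTemperature__c": "0.3",
        "IsActive__c": "true",
    },
    "your-org-alias",
)
print(shlex.join(cmd))
```

Scripting the record creation keeps the configuration reproducible across sandboxes, which matters once you rebuild scratch orgs regularly.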
Create an AIAgentDefinition__c record with a narrow first use case.
| Field | Suggested starting value |
|---|---|
| `DeveloperName__c` | `Sales_Assistant` |
| `RuntimeStrategy__c` | `Conversational` |
| `LLMConfiguration__c` | your new LLM configuration |
| `MemoryStrategy__c` | `Buffer Window` |
| `HistoryTurnLimit__c` | 8 to 10 |
| `IsActive__c` | `true` |
Suggested prompt pair:

Identity prompt:

```
You are a Salesforce assistant for the sales team.
You help users inspect account, contact, and opportunity data and propose next steps.
```

Instructions prompt:

```
- Be concise and explicit about what you know from Salesforce data
- Ask clarifying questions when the user does not provide a unique identifier
- Confirm before any record mutation
- Prefer read-only actions until the user clearly asks for a change
```

Most first deployments fail because the first agent is asked to do too much at once. A narrow first agent is easier to validate because each prompt, capability, and result can be checked on its own.
Good first agents usually do one narrow thing well, such as looking up a specific record type or creating a single kind of follow-up task. Poor first agents usually promise to “help with anything.”
Example capability built on `GetRecordDetails`:

| Field | Example value |
|---|---|
| `CapabilityName__c` | `lookup_contact_by_email` |
| `ImplementationType__c` | `Standard` |
| `StandardActionType__c` | `GetRecordDetails` |
| `ExposureLevel__c` | `External` |
Backend configuration:
```json
{
  "variant": "field_list",
  "objectApiName": "Contact",
  "defaultFields": ["Id", "FirstName", "LastName", "Email", "Phone", "Title", "Account.Name"]
}
```

Parameters schema:

```json
{
  "type": "object",
  "required": ["Email"],
  "properties": {
    "Email": {
      "type": "string",
      "description": "Exact email address of the contact to retrieve"
    }
  }
}
```

Example capability built on `CreateRecord`:
| Field | Example value |
|---|---|
| `CapabilityName__c` | `create_follow_up_task` |
| `ImplementationType__c` | `Standard` |
| `StandardActionType__c` | `CreateRecord` |
| `HITLMode__c` | `Confirmation` |
Backend configuration:
```json
{
  "objectApiName": "Task",
  "defaultFieldValues": {
    "Status": "Not Started",
    "Priority": "Normal"
  }
}
```

Parameters schema:

```json
{
  "type": "object",
  "required": ["recordData"],
  "properties": {
    "recordData": {
      "type": "object",
      "description": "Field names and values for the task to create"
    }
  }
}
```

The read-only path should be your first milestone. Once the runtime can retrieve data correctly, you have a stable base for later actions, approvals, and broader prompts.
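Before loading these definitions into the org, it can help to replay a tool call in plain Python: check sample arguments against the `create_follow_up_task` parameters schema, then overlay `recordData` on the configured defaults. The validator below covers only the schema features used in this guide, and the caller-wins merge order is an assumption about the `CreateRecord` action, not confirmed framework behavior.

```python
import json

# Parameters schema and backend configuration from the tables above.
PARAMS_SCHEMA = json.loads("""
{
  "type": "object",
  "required": ["recordData"],
  "properties": {
    "recordData": {"type": "object", "description": "Field names and values for the task to create"}
  }
}
""")
DEFAULT_FIELD_VALUES = {"Status": "Not Started", "Priority": "Normal"}

TYPE_CHECKS = {"object": dict, "string": str}

def check_args(args: dict, schema: dict) -> list:
    """Minimal validation covering only the schema features used here."""
    problems = [f"missing required key: {k}"
                for k in schema.get("required", []) if k not in args]
    for key, spec in schema.get("properties", {}).items():
        expected = TYPE_CHECKS.get(spec.get("type"))
        if key in args and expected and not isinstance(args[key], expected):
            problems.append(f"{key} must be of type {spec['type']}")
    return problems

def merge_record_data(defaults: dict, record_data: dict) -> dict:
    """Overlay model-supplied fields on the defaults (caller wins: assumption)."""
    return {**defaults, **record_data}

args = {"recordData": {"Subject": "Follow up with Jane Doe", "Priority": "High"}}
assert check_args(args, PARAMS_SCHEMA) == []
print(merge_record_data(DEFAULT_FIELD_VALUES, args["recordData"]))
# {'Status': 'Not Started', 'Priority': 'High', 'Subject': 'Follow up with Jane Doe'}
```

Walking through the merge by hand like this makes it obvious which fields the model controls and which the configuration pins down, which is exactly what the `Confirmation` HITL step asks a human to review.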
You can test the framework through the chat UI or the REST surface.
For the chat UI, add the `aiAssistantChat` component to a page.

For the REST surface, send a request to:

```
/services/apexrest/ai/agent/process
```

Use a payload that includes:

- `originalUserId`
- `agentDefinitionId`
- `turnIdentifier`
- `userMessage`

If you start with a read-only lookup capability, use prompts that are easy to validate, such as asking for a specific contact by email address.
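A direct call against the REST surface can be sketched with the standard library. The endpoint path and the four payload keys come from this guide; the host, token, and record identifiers below are placeholders you must replace, and the assumption that `agentDefinitionId` takes a record Id should be verified against the framework's REST handler.

```python
import json
import urllib.request

# Sketch of a direct call to the agent REST endpoint. The path and payload
# keys come from this guide; the host, token, and Ids are placeholders.
INSTANCE_URL = "https://your-instance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "YOUR_SESSION_TOKEN"                       # placeholder

payload = {
    "originalUserId": "005...",      # placeholder: user the turn runs as
    "agentDefinitionId": "a0X...",   # placeholder: AIAgentDefinition__c identifier
    "turnIdentifier": "turn-0001",   # unique identifier for this turn
    "userMessage": "Look up the contact with email jane.doe@example.com",
}

request = urllib.request.Request(
    INSTANCE_URL + "/services/apexrest/ai/agent/process",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request)  # send only after filling in the placeholders
print(request.full_url)
```

Driving the endpoint from a script like this removes the UI from the equation, so a failure points at the runtime or configuration rather than the chat component.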
Avoid testing with broad prompts like “help me with sales” until the basic runtime path is proven.
When a test completes, inspect these records in Salesforce:
- `AgentExecution__c` to confirm lifecycle state and resolved runtime context
- `ExecutionStep__c` to see whether the model responded directly or invoked a tool
- `InteractionSession__c` if you are testing a session-aware path
- `InteractionMessage__c` if a transport-level interaction is involved

Those records tell you much more than the final UI output alone.
Before adding more capabilities, confirm these outcomes:
- An `AgentExecution__c` record is created
- `ExecutionStep__c` rows are written

If these outcomes hold, you have a real working base, not just a demo that happened to answer once.
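The same verification can be scripted by pulling recent runtime records with `sf data query`. The helper below only assembles the command line; the CLI flag names are an assumption to check against your installed Salesforce CLI, and the SOQL selects generic audit fields because this guide does not enumerate the objects' custom fields.

```python
# Sketch: assemble `sf data query` commands to inspect the runtime records
# written by a test turn. Verify flag names against your Salesforce CLI;
# the SOQL uses only standard audit fields (Id, CreatedDate).
import shlex

def build_query_cmd(soql: str, org_alias: str) -> list:
    """Assemble the argv list for `sf data query`."""
    return ["sf", "data", "query", "--query", soql, "--target-org", org_alias]

for sobject in ["AgentExecution__c", "ExecutionStep__c"]:
    soql = f"SELECT Id, CreatedDate FROM {sobject} ORDER BY CreatedDate DESC LIMIT 5"
    print(shlex.join(build_query_cmd(soql, "your-org-alias")))
```

Running these after every test turn gives you a quick, repeatable signal that the runtime is actually writing execution records, independent of what the chat UI shows.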
A common failure at this stage is an `LLMConfiguration__c` record that points to the wrong Named Credential.

After your first agent works, your next goal should not be “add everything.” It should be “make one behavior trustworthy.”
That usually means: