Getting Started

This guide is the fastest safe path to a first working agent in the core runtime.

The goal is not to build your final production agent on the first try. The goal is to prove that the core path works:

  • the metadata deploys
  • provider credentials work
  • the agent can call a model
  • the model can call at least one capability
  • execution and trace records are created in the org

What You Are Building

By the end of this guide, you should have:

  • one active LLMConfiguration__c
  • one active AIAgentDefinition__c
  • one or two narrow capabilities
  • one successful test path through chat or REST
  • visible AgentExecution__c and ExecutionStep__c records proving the runtime is working

Setup Flow

  1. Deploy the core metadata.
  2. Configure a provider Named Credential.
  3. Create an LLMConfiguration__c record.
  4. Create an AIAgentDefinition__c record.
  5. Add one or two capabilities.
  6. Test through chat or API.

What Success Looks Like

By the end of this guide, you should be able to:

  • start a new execution for a real agent
  • see AgentExecution__c and ExecutionStep__c records being created
  • confirm the model can call at least one capability correctly
  • if something fails, determine whether the issue is in deployment, credentials, configuration, or runtime behavior

Prerequisites

Salesforce org

Use a sandbox or scratch org with System Administrator access for the initial setup.

Provider access

You need an API key or equivalent access for OpenAI or another OpenAI-compatible provider.

CLI tooling

Use Salesforce CLI for source deployments. CumulusCI is optional but useful for scratch org flows.

Before You Start

The biggest mistake teams make is trying to validate too many things at once.

For your first run, avoid combining:

  • multiple write-capable actions
  • external callouts
  • approval chains
  • custom Apex extensions
  • complex prompt logic

You want the first result to answer one question clearly: can this org run the core framework end to end?

Step 1: Deploy the Core Runtime

```sh
git clone https://github.com/iamsonal/aiAgentStudio.git
cd aiAgentStudio
sf org login web -a your-org-alias
sf project deploy start -d force-app/main/default -o your-org-alias
```

Optional seed data for local experimentation:

```sh
sf project deploy start -d seed-data/main/default -o your-org-alias
```

If you are debugging deployment issues, solve those before touching prompts or capabilities. A clean metadata deployment is the foundation for everything else.

Step 2: Configure Provider Credentials

The core package includes an OpenAI-oriented setup:

  • Named Credential: OpenAI_API
  • External Credential: OpenAIEC
  • Provider adapter: OpenAIProviderAdapter

For OpenAI:

  1. Open Setup → External Credentials.
  2. Open OpenAIEC.
  3. Edit the principal and set the OpenAIKey authentication parameter.
  4. Save the credential and confirm the corresponding Named Credential is active.

For another OpenAI-compatible provider:

  1. Clone the same credential pattern with the provider’s base URL.
  2. Configure the provider-specific authentication method.
  3. Point LLMConfiguration__c.ProviderAdapterClass__c to OpenAIProviderAdapter.

If model calls fail later, credentials are one of the first places to check. Many “agent” failures are really auth or endpoint configuration failures.
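One way to rule out the key itself is to exercise it outside Salesforce entirely. The sketch below is a hypothetical sanity check, not part of the framework: it builds a request against the OpenAI `GET /v1/models` endpoint with a Bearer token and only sends it if an `OPENAI_API_KEY` environment variable is set. For another provider, swap the base URL.

```python
import json
import os
import urllib.request

# Assumption: an OpenAI-compatible provider; adjust BASE_URL for others.
BASE_URL = "https://api.openai.com/v1"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build a GET /v1/models request authenticated with a Bearer token."""
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        # If this call fails, the problem is the key or endpoint, not the org.
        with urllib.request.urlopen(build_models_request(key), timeout=30) as resp:
            print("key OK, models visible:", len(json.load(resp).get("data", [])))
    else:
        print("Set OPENAI_API_KEY to run the live check.")
```

If this succeeds but calls from the org fail, focus on the External Credential principal and the Named Credential URL rather than the provider account.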

Step 3: Create an LLM Configuration

Create an LLMConfiguration__c record for the provider and model you want to use.

| Field | Example value | Notes |
| --- | --- | --- |
| DeveloperName__c | OpenAI_GPT4oMini | Stable API-facing name |
| NamedCredential__c | OpenAI_API | The credential created in the previous step |
| ProviderAdapterClass__c | OpenAIProviderAdapter | Core adapter for OpenAI-compatible APIs |
| DefaultModelIdentifier__c | gpt-4o-mini | Model name to send on provider calls |
| DefaultTemperature__c | 0.3 | Lower values are better for predictable tool usage |
| IsActive__c | true | Enable for agent use |

For first rollout, prefer a lower temperature. It usually makes tool selection and prompt behavior more stable while you are still validating the runtime.
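If you prefer to script record creation instead of using the Setup UI, the field values from the table can be loaded through any Salesforce REST client. The sketch below is one possible approach using the third-party simple-salesforce library (an assumption, not part of the framework); the environment variable names are placeholders.

```python
import os

# Field API names come from the table above (LLMConfiguration__c object).
llm_config = {
    "DeveloperName__c": "OpenAI_GPT4oMini",
    "NamedCredential__c": "OpenAI_API",
    "ProviderAdapterClass__c": "OpenAIProviderAdapter",
    "DefaultModelIdentifier__c": "gpt-4o-mini",
    "DefaultTemperature__c": 0.3,
    "IsActive__c": True,
}

if __name__ == "__main__" and os.environ.get("SF_ACCESS_TOKEN"):
    # Requires `pip install simple-salesforce`; inert without org credentials.
    from simple_salesforce import Salesforce

    sf = Salesforce(
        instance_url=os.environ["SF_INSTANCE_URL"],
        session_id=os.environ["SF_ACCESS_TOKEN"],
    )
    result = sf.LLMConfiguration__c.create(llm_config)
    print("created:", result.get("id"))
```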

Step 4: Create an Agent Definition

Create an AIAgentDefinition__c record with a narrow first use case.

| Field | Suggested starting value |
| --- | --- |
| DeveloperName__c | Sales_Assistant |
| RuntimeStrategy__c | Conversational |
| LLMConfiguration__c | your new LLM configuration |
| MemoryStrategy__c | Buffer Window |
| HistoryTurnLimit__c | 8 to 10 |
| IsActive__c | true |

Suggested prompt pair:

Identity prompt

```text
You are a Salesforce assistant for the sales team.
You help users inspect account, contact, and opportunity data and propose next steps.
```

Instructions prompt

```text
- Be concise and explicit about what you know from Salesforce data
- Ask clarifying questions when the user does not provide a unique identifier
- Confirm before any record mutation
- Prefer read-only actions until the user clearly asks for a change
```

Why start narrow

Most first deployments fail because the first agent is asked to do too much at once. A narrow first agent is easier to validate because:

  • the prompts are simpler
  • the capability set is smaller
  • tool selection is easier to interpret
  • failures are easier to trace to one configuration choice

Good first agents usually do one of these:

  • look up a record by a clear identifier
  • summarize a small set of Salesforce data
  • answer questions over a tightly scoped business domain

Poor first agents usually promise to “help with anything.”

Step 5: Add Initial Capabilities

Example capability built on GetRecordDetails:

| Field | Example value |
| --- | --- |
| CapabilityName__c | lookup_contact_by_email |
| ImplementationType__c | Standard |
| StandardActionType__c | GetRecordDetails |
| ExposureLevel__c | External |

Backend configuration:

```json
{
  "variant": "field_list",
  "objectApiName": "Contact",
  "defaultFields": ["Id", "FirstName", "LastName", "Email", "Phone", "Title", "Account.Name"]
}
```

Parameters schema:

```json
{
  "type": "object",
  "required": ["Email"],
  "properties": {
    "Email": {
      "type": "string",
      "description": "Exact email address of the contact to retrieve"
    }
  }
}
```

The read-only path should be your first milestone. Once the runtime can retrieve data correctly, you have a stable base for later actions, approvals, and broader prompts.
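To make the parameters schema concrete, here is a minimal stdlib-only sketch of the contract it expresses: required keys must be present and typed as declared. The runtime presumably enforces this itself; this is only an illustration of what the model's tool-call arguments must satisfy.

```python
# Schema mirrors the parameters schema above (description omitted).
SCHEMA = {
    "type": "object",
    "required": ["Email"],
    "properties": {
        "Email": {"type": "string"},
    },
}

_TYPES = {"string": str, "number": (int, float), "boolean": bool, "object": dict}

def validate_args(args: dict, schema: dict = SCHEMA) -> list[str]:
    """Return a list of problems; an empty list means the arguments pass."""
    problems = []
    for key in schema.get("required", []):
        if key not in args:
            problems.append(f"missing required property: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in args and not isinstance(args[key], _TYPES[spec["type"]]):
            problems.append(f"{key} should be {spec['type']}")
    return problems

# e.g. validate_args({"Email": "john.smith@example.com"}) → []
```

A tight schema like this is also what makes tool selection legible: the model sees exactly one required string parameter, so there is little room for it to improvise.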

Step 6: Test the Agent

You can test the framework through the chat UI or the REST surface.

Chat UI path

  1. Open Lightning App Builder.
  2. Add the aiAssistantChat component to a page.
  3. Set the agent developer name to your agent record.
  4. Save and activate the page.

API path

Send a request to:

/services/apexrest/ai/agent/process

Use a payload that includes:

  • originalUserId
  • agentDefinitionId
  • turnIdentifier
  • userMessage
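A sketch of building that request follows. The payload keys come from the list above; the Bearer-token auth and the exact response shape are assumptions, so inspect the first response interactively rather than trusting this outline.

```python
import json
import urllib.request
import uuid

def build_agent_request(instance_url: str, access_token: str,
                        agent_definition_id: str, user_id: str,
                        message: str) -> urllib.request.Request:
    """Build a POST to the agent process endpoint with the documented keys."""
    payload = {
        "originalUserId": user_id,
        "agentDefinitionId": agent_definition_id,
        "turnIdentifier": str(uuid.uuid4()),  # unique per conversational turn
        "userMessage": message,
    }
    return urllib.request.Request(
        f"{instance_url}/services/apexrest/ai/agent/process",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```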

Suggested first test prompts

If you start with a read-only lookup capability, use prompts that are easy to validate:

  • “Look up the contact with email john.smith@example.com”
  • “Show me the current details for case 00001234”
  • “Find the opportunity with id 006…”

Avoid testing with broad prompts like “help me with sales” until the basic runtime path is proven.

What To Inspect After A Test Run

When a test completes, inspect these records in Salesforce:

  • AgentExecution__c to confirm lifecycle state and resolved runtime context
  • ExecutionStep__c to see whether the model responded directly or invoked a tool
  • InteractionSession__c if you are testing a session-aware path
  • InteractionMessage__c if a transport-level interaction is involved

Those records tell you much more than the final UI output alone.

Step 7: Validate the Runtime

Before adding more capabilities, confirm these outcomes:

  • an AgentExecution__c record is created
  • ExecutionStep__c rows are written
  • the agent can call the read-only capability correctly
  • any write action pauses for confirmation or approval when configured

If all four are true, you have a real working base, not just a demo that happened to answer once.
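The first two checks are easy to script as SOQL record counts. The object names come from this guide; the client shown (simple-salesforce) is an assumption, and the block stays inert unless org credentials are present in the environment.

```python
import os

# Smoke checks for the first two outcomes above.
CHECKS = [
    ("executions", "SELECT COUNT() FROM AgentExecution__c"),
    ("steps", "SELECT COUNT() FROM ExecutionStep__c"),
]

if __name__ == "__main__" and os.environ.get("SF_ACCESS_TOKEN"):
    from simple_salesforce import Salesforce

    sf = Salesforce(
        instance_url=os.environ["SF_INSTANCE_URL"],
        session_id=os.environ["SF_ACCESS_TOKEN"],
    )
    for name, soql in CHECKS:
        print(name, sf.query(soql)["totalSize"])
```

Nonzero counts only prove records are being written; the capability-call and confirmation checks still need a manual read of ExecutionStep__c.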

Common First-Run Mistakes

  • provider credentials are set up, but the LLMConfiguration__c record points to the wrong Named Credential
  • the agent is active, but no capability is visible to the model
  • the capability description is too vague, so the model never selects it
  • the first agent mixes retrieval, mutation, and external callouts before the base runtime path is validated
  • the page or API caller points to the wrong agent developer name

What To Do Next

After your first agent works, your next goal should not be “add everything.” It should be “make one behavior trustworthy.”

That usually means:

  • tightening prompts
  • improving capability descriptions and schemas
  • turning on the right trust controls
  • choosing whether the pattern should stay conversational or become direct
  • adding one carefully governed write action only after retrieval behavior is stable

Next Steps