AI Agent Studio is built on a modular, plugin-based architecture using Strategy, Factory, and Chain of Responsibility patterns. The framework separates concerns across orchestration, execution, memory, security, and observability layers.
Core Components
Orchestrators
Four execution patterns: Conversational (multi-turn chat), Function (single-task with sync/async routing), Workflow (multi-agent state machines), Email (thread processing with auto-reply).
All orchestrators extend BaseAgentOrchestrator and implement the IAgentOrchestrator interface.
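A custom pattern plugs in the same way. The sketch below is hypothetical: the process method name and its signature are assumptions for illustration, not the framework's actual IAgentOrchestrator contract.

```apex
// Hypothetical custom orchestrator. The method name and signature are
// assumptions; consult the actual IAgentOrchestrator interface.
public class SurveyOrchestrator extends BaseAgentOrchestrator implements IAgentOrchestrator {

    // Assumed entry point: run one processing turn for the execution record.
    public override void process(AgentExecution__c execution) {
        // Base-class plumbing (context loading, LLM dispatch, step logging)
        // is inherited; only the pattern-specific turn handling lives here.
    }
}
```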
LLM Providers
Multi-provider support: OpenAI, Claude (add-on), Gemini (add-on). Extensible via the ILLMProviderAdapter interface.
BaseProviderAdapter provides common HTTP handling, retry logic, and error normalization.
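Adding a provider means extending BaseProviderAdapter and implementing ILLMProviderAdapter. The sketch below is illustrative only: the buildRequest hook, the Named Credential, and the payload shape are all assumptions.

```apex
// Hypothetical fourth provider. The buildRequest hook, Named Credential, and
// payload shape are assumptions; BaseProviderAdapter is described as supplying
// the shared HTTP send, retry logic, and error normalization around it.
public class MistralProviderAdapter extends BaseProviderAdapter implements ILLMProviderAdapter {

    public override HttpRequest buildRequest(String prompt) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Mistral_API/chat/completions'); // assumed Named Credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{
            'model'    => 'mistral-large-latest',
            'messages' => new List<Object>{
                new Map<String, Object>{ 'role' => 'user', 'content' => prompt }
            }
        }));
        return req;
    }
}
```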
Actions
Standard actions: GetRecordDetails, CreateRecord, UpdateRecord, FlowHandler, PostChatter.
Context Providers
Dynamic data enrichment: supply related records, user context, and computed data to agents.
Providers implement the IAgentContextProvider interface with bulk-safe, security-enforced queries.
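A sketch of a provider that feeds open-case summaries to the agent. The getContext signature is an assumption; what it demonstrates are the two documented properties: one bulk query for all incoming IDs, run in user mode so object and field security are enforced.

```apex
// Hypothetical provider; the interface method shape is an assumption.
public class OpenCasesContextProvider implements IAgentContextProvider {

    public Map<Id, String> getContext(Set<Id> accountIds) {
        Map<Id, String> contextByAccount = new Map<Id, String>();
        // One bulk-safe query for all records, WITH USER_MODE so the running
        // user's object and field security is enforced.
        for (Case c : [SELECT AccountId, Subject, Status
                       FROM Case
                       WHERE AccountId IN :accountIds AND IsClosed = false
                       WITH USER_MODE]) {
            String line = c.Subject + ' (' + c.Status + ')';
            String existing = contextByAccount.get(c.AccountId);
            contextByAccount.put(c.AccountId,
                existing == null ? line : existing + '\n' + line);
        }
        return contextByAccount;
    }
}
```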
Execution Lifecycle
Request Entry
Requests enter via AgentExecutionService.startExecution() (invocable from Flow or Apex) or the REST API endpoint /services/apexrest/ai/agent/process/.
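From Apex the call might look like the snippet below; the parameter list is an assumption inferred from the context fields described later in this section, not the documented signature.

```apex
// Illustrative only: the actual startExecution() parameters may differ.
Id executionId = AgentExecutionService.startExecution(
    'Case_Triage_Agent',                               // assumed agent developer name
    caseRecord.Id,                                     // becomes SourceRecordId__c
    'Summarize this case and recommend next steps.'    // initial user message
);
```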
Orchestrator Routing
AgentExecutionService.getOrchestrator() uses AgentOrchestratorMapping__mdt to route by agent type to ConversationalOrchestrator, FunctionOrchestrator, WorkflowOrchestrator, or EmailOrchestrator.
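Conceptually the routing reduces to a metadata lookup plus dynamic instantiation. In the sketch below, OrchestratorClass__c is an assumed field name on the mapping metadata.

```apex
// Simplified view of metadata-driven routing; Type.forName keeps the
// agent-type-to-class mapping declarative.
AgentOrchestratorMapping__mdt mapping =
    AgentOrchestratorMapping__mdt.getInstance(agentType); // e.g. 'Conversational'
Type implType = Type.forName(mapping.OrchestratorClass__c); // assumed field name
IAgentOrchestrator orchestrator = (IAgentOrchestrator) implType.newInstance();
```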
Agent configuration:
Performance: AsyncDispatchType__c (High dispatches via Platform Events, Low via Queueable), MaxProcessingCycles__c (Max LLM turns per execution), EnableDependencyValidation__c (Enforce the tool dependency graph).
Workflow: ToolDependencyGraph__c (JSON dependency graph used for validation).
Tool/function definitions:
Basic Config: CapabilityName__c (Tool name exposed to the LLM), Description__c (When and how to use this tool; critical for LLM guidance), ImplementationType__c (Standard/Apex/Flow), StandardActionType__c (Maps to StandardActionHandler__mdt), ImplementationDetail__c (Apex class name or Flow API name).
Schema: Parameters__c (JSON Schema telling the LLM which arguments to supply; see the example after this list), BackendConfiguration__c (Admin config JSON passed to the action implementation).
Execution: RunAsynchronously__c (Execute in a separate Queueable), FailFastOnError__c (Fail immediately on error instead of passing the error back to the LLM), HITLMode__c (Human-in-the-loop mode: Disabled/Confirmation/Approval/ConfirmationThenApproval), ExposureLevel__c (External = visible to the LLM, Internal = framework only, Disabled).
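For example, Parameters__c for a hypothetical record-lookup tool could hold a schema like the following (the property names are purely illustrative):

```json
{
  "type": "object",
  "properties": {
    "recordId": {
      "type": "string",
      "description": "Salesforce Id of the record to fetch"
    },
    "fields": {
      "type": "array",
      "items": { "type": "string" },
      "description": "API names of the fields to return"
    }
  },
  "required": ["recordId"]
}
```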
Execution tracking:
Status Fields: ExecutionStatus__c (Idle/Processing/Completed/Failed/Cancelled, lifecycle-level), ProcessingStatus__c (Idle/Processing/Awaiting Action/Awaiting Followup/Failed, turn-level), CurrentTurnIdentifier__c (Used by StaleJobDetector to detect stale executions).
Context: SourceRecordId__c (Context record such as an Account or Case), OriginalUserId__c (User who initiated the execution), ServiceUserId__c (Service user when RequiresServiceUserContext__c is enabled).
Multi-Record DLQ: BatchId__c (Correlation ID for grouping related executions), Priority__c (Execution priority, higher = more urgent, default 0), ProcessingStartedAt__c (Timestamp when the worker started, used for stale detection), RetryCount__c, MaxRetries__c, NextRetryAt__c (Exponential backoff tracking; see the sketch after this list).
Async Coordination: PendingAsyncToolCount__c (Atomic counter for async tool tracking).
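A minimal sketch of how NextRetryAt__c could be derived from RetryCount__c; the 30-second base delay and one-hour cap are assumptions, not framework defaults.

```apex
// Hypothetical backoff calculation; the base delay and cap are illustrative.
public static Datetime nextRetryAt(AgentExecution__c exec) {
    Integer retryCount = exec.RetryCount__c == null ? 0 : exec.RetryCount__c.intValue();
    // 30s, 60s, 120s, ... doubling per attempt, capped at one hour.
    Long delaySeconds = Math.min(30 * Math.pow(2, retryCount).longValue(), 3600L);
    return Datetime.now().addSeconds(delaySeconds.intValue());
}
```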
Deferred DML
Enables multiple LLM calls within a single transaction when the intervening tools perform no DML or callouts.
Flow: Entry enables deferred DML → ExecutionStepService buffers inserts → After each LLM call, check eligibility → If safe: continue in same transaction → On completion: commit all buffered DML
Eligibility: Deferred mode enabled + Pre-existing execution + No DML/callouts from the tool + Under the max LLM calls limit (see the sketch below)
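The check might look like this sketch, in which every name is an assumption. Tool side effects are detected by snapshotting governor-limit counters around the tool call, so the LLM's own callouts don't disqualify the transaction.

```apex
// Hypothetical eligibility gate mirroring the four documented conditions.
public class DeferredDmlGate {
    private Boolean deferredDmlEnabled;          // deferred mode flag for this agent
    private Integer maxLlmCallsPerTransaction;   // assumed configurable cap

    public DeferredDmlGate(Boolean enabled, Integer maxCalls) {
        this.deferredDmlEnabled = enabled;
        this.maxLlmCallsPerTransaction = maxCalls;
    }

    // dmlBefore/calloutsBefore are Limits snapshots taken before the tool ran.
    public Boolean canContinue(AgentExecution__c exec, Integer dmlBefore,
                               Integer calloutsBefore, Integer llmCallsSoFar) {
        return this.deferredDmlEnabled                         // deferred mode enabled
            && exec.Id != null                                 // pre-existing execution
            && Limits.getDmlStatements() == dmlBefore          // tool performed no DML
            && Limits.getCallouts() == calloutsBefore          // tool performed no callouts
            && llmCallsSoFar < this.maxLlmCallsPerTransaction; // under the LLM-call cap
    }
}
```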
Multi-Record DLQ
Multi-record executions use a DLQ (dead-letter queue) pattern for resilient processing at scale.
Flow: Dispatcher creates AgentExecution__c records sharing a BatchId__c → Enqueues a limited number of workers → Each worker claims a record with FOR UPDATE → Processes it → On completion, a trigger chains the next pending record → Workers exhaust naturally once nothing is pending
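The claim step might look like the sketch below, inside a hypothetical worker that received batchId. The status values come from the tracking fields above; the two-query shape is an assumption, needed because SOQL disallows ORDER BY together with FOR UPDATE.

```apex
// Rank candidates first (FOR UPDATE cannot be combined with ORDER BY)...
Datetime cutoff = Datetime.now();
List<AgentExecution__c> candidates = [
    SELECT Id FROM AgentExecution__c
    WHERE BatchId__c = :batchId
      AND ProcessingStatus__c = 'Idle'
      AND (NextRetryAt__c = null OR NextRetryAt__c <= :cutoff)
    ORDER BY Priority__c DESC
    LIMIT 1
];
if (!candidates.isEmpty()) {
    // ...then lock the chosen row so no concurrent worker can claim it.
    AgentExecution__c exec = [
        SELECT Id, ProcessingStatus__c FROM AgentExecution__c
        WHERE Id = :candidates[0].Id FOR UPDATE
    ];
    if (exec.ProcessingStatus__c == 'Idle') {   // re-check after acquiring the lock
        exec.ProcessingStatus__c = 'Processing';
        exec.ProcessingStartedAt__c = Datetime.now();
        update exec;
        // ...process, then chain the next pending record on completion.
    }
}
```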