This page explains the framework in human terms. It is meant to give you the right mental model before you read dozens of classes or trace a request through logs.
Loom is a governed AI runtime for Salesforce.
It does more than send a prompt to a model and return text. It accepts requests from multiple entry surfaces, chooses the right runtime style, manages tools and continuity, applies trust and approval controls, and writes what happened into durable records that can be audited and operated.
That is what makes it a runtime rather than a thin wrapper around an LLM API.
The framework works well because it separates concerns that simpler systems often blur together.
- AIAgentDefinition__c describes what an agent is.
- AgentExecution__c represents a unit of work performed by that agent.
- InteractionSession__c is the continuity anchor for a conversation or thread.
- InteractionMessage__c is the transport-level inbound or outbound message.
- ExecutionStep__c is the detailed audit trail of what happened during execution.

Those are deliberately not interchangeable. A conversation is not the same thing as a runtime work item, and a transport message is not the same thing as an execution step. Once you keep those boundaries clear, the rest of the framework becomes much easier to follow.
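The layering shows up directly in how you would query these records. The sketch below is illustrative Apex only: the object names come from this page, but the lookup field names (Session__c, Execution__c) and the specific fields selected are assumptions about the schema, not confirmed API.

```apex
// Hypothetical sketch: walking the record layers for one execution.
// Lookup and field names here are assumed, not confirmed.
Id executionId = null; // supply a real AgentExecution__c Id

AgentExecution__c exec = [
    SELECT Id, Status__c, Session__c
    FROM AgentExecution__c
    WHERE Id = :executionId
];

// The continuity anchor lives one level above the work item.
InteractionSession__c session = [
    SELECT Id FROM InteractionSession__c WHERE Id = :exec.Session__c
];

// Transport truth: what actually crossed the channel.
List<InteractionMessage__c> messages = [
    SELECT Id, Direction__c
    FROM InteractionMessage__c
    WHERE Session__c = :exec.Session__c
];

// Runtime truth: what the agent actually did while handling this execution.
List<ExecutionStep__c> steps = [
    SELECT Id, StepType__c
    FROM ExecutionStep__c
    WHERE Execution__c = :exec.Id
];
```

Notice that messages hang off the session while steps hang off the execution; that asymmetry is the record model's point.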
The shortest accurate summary is:
a request comes in, the system figures out the context around that request, starts or resumes execution, lets strategy and channel shape the runtime behavior, runs the LLM and tool loop, and then writes the resulting state back into durable records.
If you want one picture in your head, use this:
```mermaid
graph TD
req["Inbound Request"] --> surface["Entry Surface"]
surface --> routing["Routing And Session Resolution"]
routing --> exec["AgentExecutionService"]
exec --> runtime["Runtime Strategy"]
runtime --> llm["LLMInteractionService"]
llm --> orch["OrchestrationService"]
orch --> tools["CapabilityExecutionService"]
tools --> orch
orch --> delivery["ChannelDeliveryPolicyService"]
delivery --> out["Channel Output"]
exec --> state["AgentStateService"]
exec --> steps["ExecutionStepService"]
routing --> session["InteractionSessionService"]
routing --> messages["InteractionMessageService"]
```
This diagram is intentionally high level. It is not a class inventory. It is the shape of the runtime.
The architecture makes the most sense when you keep three dimensions separate.
Runtime Strategy
This answers: what style of runtime should this request use?
- Conversational for ongoing interactions where continuity matters
- Direct for bounded work such as classification, enrichment, summarization, or controlled automation

Interaction Channel
This answers: where did the request come from, and how should replies be handled?

Composition
Some work is just one normal agent execution. Some work belongs to a larger multi-agent process. That is why pipeline-style composition lives beside the normal runtime rather than being forced into the same conceptual box.
This separation is one of the strongest parts of the design. If runtime strategy and interaction channel were fused together, the framework would quickly collapse into brittle variants like “ChatAgent,” “EmailAgent,” “SMSAgent,” “WhatsAppAgent,” or “TeamsAgent.” Instead, runtime style and transport behavior stay orthogonal.
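The orthogonality can be pictured as two independent lookups. This is a conceptual Apex sketch: RuntimeRegistryService and InteractionChannelRegistryService are named in this page, but the method names, return types, and variable shapes below are assumptions used only to show the shape.

```apex
// Illustrative only: strategy and channel are resolved as independent axes.
// Method names (resolveStrategy, resolveChannel) and the RuntimeStrategy /
// ChannelAdapter types are hypothetical, not the framework's confirmed API.
RuntimeStrategy strategy =
    RuntimeRegistryService.resolveStrategy(agentDefinition); // Conversational or Direct
ChannelAdapter channel =
    InteractionChannelRegistryService.resolveChannel(route); // Email, SMS, Slack, ...

// Any strategy can pair with any channel; there is no "EmailAgent" subclass.
```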
Every request starts somewhere.
The first job of the system is to normalize the request and figure out enough context to make the correct runtime decision.
For channel-based traffic, the framework uses services such as:
- InboundInteractionPipelineService
- InteractionSessionService
- InteractionMessageService
- ChannelRoutingService

Those services help answer the early questions that matter: which session or thread does this belong to, which agent owns this route, and what actually arrived over the channel.
Once that context is resolved, AgentExecutionService.startExecution() becomes the main execution
boundary. It loads configuration, decides whether the request belongs to a normal runtime or another
composition path, resolves strategy and channel behavior, and delegates to the correct
implementation.
That handoff is a good place to orient yourself when you are new to the codebase. It is where the framework stops thinking about “what came in” and starts thinking about “what runtime should own this work.”
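A call into that boundary might look roughly like the following. Only AgentExecutionService.startExecution() is named in this page; the request-builder type and its fields are assumptions introduced for illustration.

```apex
// Hypothetical call shape. AgentExecutionRequest and its fields are
// assumed; only AgentExecutionService.startExecution() is confirmed.
AgentExecutionRequest req = new AgentExecutionRequest();
req.agentDeveloperName = 'Support_Triage'; // example agent, not a real record
req.sessionId = resolvedSessionId;         // null for a fresh Direct execution
req.payload = inboundPayload;              // the normalized inbound request

// From here on, the framework thinks in terms of "what runtime owns this
// work" rather than "what came in".
AgentExecution__c exec = AgentExecutionService.startExecution(req);
```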
Conversational Runtime
Used when the system needs to preserve ongoing interaction continuity. Chat assistants, email conversations, and session-aware messaging flows fit this shape.
Direct Runtime
Used when the request behaves more like a bounded task. These paths care more about completing one work unit correctly than about long-lived conversational continuity.
This distinction matters because it changes what the runtime cares about.
Eventually the runtime reaches LLMInteractionService.
That is where prompts, tool schemas, context, memory, masking, and safety behavior come together to form the model request.
When a result comes back, OrchestrationService interprets it. At that point the framework decides whether the model produced a final response or asked for a tool to run.
If a tool needs to run, CapabilityExecutionService becomes the tool-execution seam. A capability might map to a packaged action type or to a custom handler implementation.
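The reasoning loop described above can be sketched conceptually. The service names are from this page; the method names, the LLMResult/ToolOutput types, and the loop mechanics are assumptions, since in practice parts of this loop run asynchronously rather than in a single while loop.

```apex
// Conceptual sketch of the LLM/tool loop; not the real control flow.
// Types and method names beyond the service names are hypothetical.
LLMResult result = LLMInteractionService.send(request);

while (result.requestsToolCall()) {
    // CapabilityExecutionService is the tool-execution seam.
    ToolOutput output = CapabilityExecutionService.run(result.toolCall());

    // Every step lands in the durable audit trail.
    ExecutionStepService.record(executionId, output);

    // Feed the tool result back to the model for the next turn.
    result = LLMInteractionService.send(request.withToolResult(output));
}

// A final (non-tool) result flows onward to delivery policy.
```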
If the result should be visible to a user, ChannelDeliveryPolicyService becomes the final
governor. It asks the channel adapter how the response should be handled. Depending on the channel
and route, that may mean replying immediately, saving a draft, suppressing output, or pushing the
result through a review flow.
This is also why the framework feels production-oriented. Delivery is treated as a governed runtime decision, not an afterthought that automatically follows every model response.
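The delivery decision can be pictured as an explicit branch rather than an implicit "always reply." ChannelDeliveryPolicyService is named in this page; the enum values and method names below mirror the behaviors described above but are hypothetical identifiers.

```apex
// Illustrative delivery governance. The DeliveryDecision enum values and
// method names are assumptions that mirror the behaviors described above.
DeliveryDecision decision =
    ChannelDeliveryPolicyService.evaluate(route, response);

switch on decision {
    when REPLY_NOW  { adapter.send(response); }          // immediate reply
    when SAVE_DRAFT { adapter.saveDraft(response); }     // e.g. email drafting
    when SUPPRESS   { /* intentionally no outbound message */ }
    when REVIEW     { ReviewQueue.enqueue(response); }   // human approval first
}
```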
One of the most common questions is why the framework has both session records and execution records, or both transport message records and execution step records.
The short answer is that they represent different layers of truth.
- InteractionSession__c answers: what conversation or thread does this belong to over time?
- AgentExecution__c answers: what runtime work item are we doing right now?
- InteractionMessage__c answers: what came in or went out over the channel?
- ExecutionStep__c answers: what did the runtime do while handling this execution?

If those concepts were collapsed into one record type, the system might be easier to start but much harder to extend, audit, and debug.
Provider-backed channels such as SMS, WhatsApp, Slack, and Teams are a good example of how the framework handles transport-specific concerns without polluting the generic agent API.
Webhook verification, signature checks, and payload parsing happen outside the generic agent runtime boundary. The provider transport is resolved through endpoint metadata, and the normalized inbound request then enters the same broader execution model the rest of the framework uses.
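That boundary might look like the sketch below. ChannelEndpoint__mdt and InboundInteractionPipelineService are named in this page; the webhook class, the helper classes (EndpointResolver, SignatureVerifier, PayloadParser), and the NormalizedInbound type are hypothetical.

```apex
// Sketch of the transport boundary described above. Everything except
// ChannelEndpoint__mdt and InboundInteractionPipelineService is hypothetical.
@RestResource(urlMapping='/hooks/*')
global with sharing class ProviderWebhook {

    @HttpPost
    global static void receive() {
        RestRequest raw = RestContext.request;

        // 1. Transport concerns stay outside the runtime boundary:
        ChannelEndpoint__mdt endpoint = EndpointResolver.fromPath(raw.requestURI);
        SignatureVerifier.check(endpoint, raw);  // provider signature/webhook auth
        NormalizedInbound inbound =
            PayloadParser.parse(endpoint, raw.requestBody);

        // 2. Only the normalized request enters the shared execution model:
        InboundInteractionPipelineService.accept(inbound);
    }
}
```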
The design goal is stability of the runtime model. Adding another provider-backed channel should not require inventing a new architecture. It should mostly require new endpoint metadata and a provider transport implementation, not changes to the execution model itself.
The framework is heavily metadata-driven, and that is one of the main reasons it can grow without constant code changes.
Agent Metadata
AIAgentDefinition__c, AgentCapability__c, and LLMConfiguration__c define agent behavior,
capability exposure, and provider/model configuration.
Routing Metadata
AgentChannelRoute__mdt, ChannelEndpoint__mdt, and InteractionChannelType__mdt define how
traffic is interpreted and which agent owns it.
Action Metadata
ActionHandlerRegistry__mdt maps packaged action types to concrete handlers.
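A registry lookup like that typically resolves a handler dynamically. The sketch below uses the standard custom-metadata getInstance() call; the HandlerClass__c field name and the IActionHandler interface are assumptions about this framework's schema.

```apex
// Hypothetical handler resolution via ActionHandlerRegistry__mdt.
// HandlerClass__c and IActionHandler are assumed names, not confirmed.
ActionHandlerRegistry__mdt entry =
    ActionHandlerRegistry__mdt.getInstance(actionTypeDeveloperName);

// Dynamic instantiation keeps dispatch metadata-driven: adding an action
// type means adding a metadata record, not editing a switch statement.
Type handlerType = Type.forName(entry.HandlerClass__c);
IActionHandler handler = (IActionHandler) handlerType.newInstance();
```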
In practical terms: new agents, capabilities, routes, and channels are mostly declared as metadata records rather than wired in with new dispatch code.
The asynchronous model is not optional complexity added for style. It exists because Salesforce transactions have real constraints around callouts, DML sequencing, retries, and concurrency.
That is why queueables and platform events appear so often in the architecture. If a request can be completed inline and the transaction is still safe, part of the runtime may proceed synchronously. If not, the framework persists the required state and continues asynchronously.
TransactionContext exists for the same reason. It helps the runtime remember what is still safe in the current transaction, such as whether a callout is still allowed or whether DML has already occurred and the work must continue asynchronously.
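The decision it enables might look like the following. TransactionContext, AgentStateService, and LLMInteractionService are named in this page; the specific method names and the queueable class are assumptions. The underlying Salesforce constraint is real: a callout cannot follow uncommitted DML in the same transaction.

```apex
// Conceptual sketch. TransactionContext is named in this doc; the specific
// checks and method names below are assumptions about what it tracks.
if (TransactionContext.calloutsAllowed()) {
    // No uncommitted DML yet: safe to call the LLM provider inline.
    LLMInteractionService.send(request);
} else {
    // DML has already happened in this transaction: persist the required
    // state and continue in a fresh asynchronous context.
    AgentStateService.checkpoint(executionId);
    System.enqueueJob(new ContinueExecutionQueueable(executionId));
}
```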
| Service | Human description |
|---|---|
| AgentExecutionService | Main traffic controller and public entrypoint for runtime work |
| RuntimeRegistryService | Resolves strategy, channel, and runtime traits |
| InteractionChannelRegistryService | Lookup layer for channel behavior |
| LLMInteractionService | Builds and sends the model request |
| OrchestrationService | Interprets model output and decides what happens next |
| CapabilityExecutionService | Runs tools and capability implementations |
| AgentStateService | Manages execution lifecycle state |
| ExecutionStepService | Records detailed execution history |
| InteractionSessionService | Manages continuity across conversations or threads |
| InteractionMessageService | Manages transport-level message history |
| ChannelDeliveryPolicyService | Governs final response delivery |
You do not need to memorize every service at once. These are the ones that define the framework’s shape.
If you want to translate this architecture into source code, a productive reading order is:
1. AgentExecutionService
2. RuntimeRegistryService
3. LLMInteractionService
4. OrchestrationService
5. CapabilityExecutionService
6. AgentStateService, ExecutionStepService, and the session/message services

When debugging, it helps to walk the request in the same order the framework does.
1. AgentExecution__c for lifecycle state, turn identifier, trigger payload, and execution type.
2. ExecutionStep__c to see what the runtime actually did.
3. InteractionMessage__c to confirm the transport event was captured correctly.

That order is usually more useful than jumping into one class at random.
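That debugging walk translates into a few queries. The object names are from this page; the specific field API names (Status__c, TurnIdentifier__c, and so on) are assumptions about the schema and should be adjusted to the actual org.

```apex
// Example debugging queries following the order above.
// Field API names beyond the object names are assumptions.
Id execId = null; // supply the AgentExecution__c Id under investigation

// 1. Lifecycle state of the work item itself.
AgentExecution__c exec = [
    SELECT Status__c, TurnIdentifier__c, TriggerPayload__c,
           ExecutionType__c, Session__c
    FROM AgentExecution__c
    WHERE Id = :execId
];

// 2. What the runtime actually did, in order.
List<ExecutionStep__c> steps = [
    SELECT StepType__c, CreatedDate
    FROM ExecutionStep__c
    WHERE Execution__c = :execId
    ORDER BY CreatedDate
];

// 3. Whether the transport event was captured correctly.
List<InteractionMessage__c> msgs = [
    SELECT Direction__c, CreatedDate
    FROM InteractionMessage__c
    WHERE Session__c = :exec.Session__c
    ORDER BY CreatedDate
];
```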
The framework is best understood as a governed AI runtime for Salesforce.
It supports multiple execution styles, multiple interaction channels, durable continuity, metadata-driven routing, detailed auditability, and explicit separation between transport, execution, and trace records.
If you keep one model in your head, keep this one:
a request enters through an entry surface, routing and context services normalize the inbound
situation, AgentExecutionService starts or resumes work, strategy and channel determine how that
work behaves, the LLM and tool loop does the reasoning, and the resulting state is written back
into executions, sessions, messages, and steps.