Loom is designed so security and governance live inside the runtime itself, not only in process documents, training, or admin convention.
That means production safety is not a single switch. It is the combined effect of:
- **Salesforce security model.** User context, sharing, CRUD, and field-level access remain the first boundary for all framework behavior.
- **Trust layers.** Masking, prompt safety, tool gating, and approval controls reduce risk before an action is executed or a prompt leaves Salesforce.
- **Operational auditability.** Executions and execution steps are persisted so teams can review what happened after the fact.
- **Controlled side effects.** Human-in-the-loop settings let teams keep sensitive changes under explicit review.
The framework assumes that enterprise AI risk comes from more than one place.
Sometimes the risk is data exposure: too much record content is sent to a provider or shown to a user who should not see it.
Sometimes the risk is unsafe action execution: the model chooses a tool that can update records, send email, or call an external API without enough constraint.
Sometimes the risk is operational opacity: the system did something important, but nobody can reconstruct what happened later.
The core runtime addresses those risks in layers rather than pretending one control solves all of them.
The framework favors `WITH USER_MODE` or equivalent user-mode patterns:

```apex
List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Id IN :accountIds
    WITH USER_MODE
];

SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.UPDATABLE,
    recordsToSave
);
Database.update(decision.getRecords(), false, AccessLevel.USER_MODE);
```

The key idea is simple: the framework should not need privilege escalation to make normal agent behavior work. If an action or runtime path needs higher privilege, that should be an explicit architectural decision with monitoring and ownership, not an invisible default.
Service-user routing can be powerful, but it changes the trust model. When you use it, you are no longer saying “this user can do this through an agent.” You are saying “this system path may run under a different identity for a controlled reason.” That deserves explicit review.
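One way to keep that review tractable is to force every escalation through a single explicit, logged path. The sketch below illustrates the idea only; `ServiceRunLog__c` and the class itself are hypothetical names, not framework APIs:

```apex
public with sharing class ServiceRoutedAction {
    // Hypothetical sketch: centralize and log any run under elevated access.
    public static void closeAsService(String reason, List<Case> casesToClose) {
        // Record WHY this path runs outside the invoking user's context.
        insert new ServiceRunLog__c(
            Reason__c = reason,
            RequestedBy__c = UserInfo.getUserId()
        );
        // The escalation is explicit SYSTEM_MODE, not an invisible default.
        Database.update(casesToClose, false, AccessLevel.SYSTEM_MODE);
    }
}
```

Because the elevated `AccessLevel.SYSTEM_MODE` call appears in exactly one reviewable place, ownership and monitoring of that trust-model change become straightforward.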
- Masking reduces the chance that raw sensitive values leave Salesforce (`PIIMaskingPreset__c`).
- Provider adapters can run safety behavior before or during model interaction (`PromptSafetyPreset__c`).
- The Tool Flow Graph limits which tools the model can currently see.
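As one illustration of the masking layer, a preset might redact obvious PII patterns before a prompt leaves Salesforce. This regex-based sketch is illustrative only; real behavior is configured through `PIIMaskingPreset__c`, not hard-coded patterns:

```apex
public class PiiMaskSketch {
    // Illustrative masking pass, not the framework's implementation.
    public static String mask(String text) {
        // Redact email addresses.
        String masked = text.replaceAll('[\\w.+-]+@[\\w-]+\\.[\\w.]+', '[EMAIL]');
        // Redact long digit runs (card- and account-number-like values).
        masked = masked.replaceAll('\\b\\d{9,16}\\b', '[NUMBER]');
        return masked;
    }
}
```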
These controls are strongest when they are treated as part of normal runtime design, not late-stage hardening. For side effects, the human-in-the-loop modes determine how much review happens before an action runs:
| Mode | What it does | Best fit |
|---|---|---|
| `Confirmation` | The user confirms directly in the conversation | Lower-risk conversational writes |
| `Approval` | The runtime pauses for a formal approval step | Higher-risk or external side effects |
| `ConfirmationThenApproval` | Requires chat confirmation before formal approval | Conversational agents with elevated risk |
| `ConditionalApproval` | The model indicates whether approval is needed at runtime | Cases where risk depends on the proposed action |
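The table reduces to one gating decision. The sketch below is hypothetical (the class and method are not framework APIs); it only shows that `ConditionalApproval` is the single mode where the model's own risk signal decides whether the runtime pauses:

```apex
public class HitlGateSketch {
    // mode matches the values in the table above.
    public static Boolean mustPause(String mode, Boolean modelFlaggedRisk) {
        if (mode == 'ConditionalApproval') {
            // The model indicates at runtime whether this action needs review.
            return modelFlaggedRisk;
        }
        // Confirmation, Approval, and ConfirmationThenApproval always pause
        // before the side effect is applied.
        return mode != null;
    }
}
```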
Many AI security problems are really capability-design problems.
Safer capability design usually means narrow, specific capabilities: a tightly scoped description, a constrained input schema, and exposure limited to the contexts that need it. If a capability is described as “update records” or “call external service,” the security problem is already upstream of the LLM.
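By contrast, a narrowly scoped capability gives the model one specific, reviewable action. The schema below is an illustrative example in the common JSON-Schema tool style; the capability name and fields are hypothetical, not part of the framework:

```json
{
  "name": "close_duplicate_case",
  "description": "Close exactly one Case as a duplicate of another Case, by Id.",
  "parameters": {
    "type": "object",
    "properties": {
      "caseId": { "type": "string", "description": "Id of the Case to close" },
      "duplicateOfCaseId": { "type": "string", "description": "Id of the surviving Case" }
    },
    "required": ["caseId", "duplicateOfCaseId"]
  }
}
```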
The framework can provide strong control points, but production safety still depends on how teams use them.
Some of the framework’s operational rules are security-relevant even when they do not look like classic security features.
Salesforce transaction rules around DML and callouts shape the runtime. Those constraints reduce the chance of unpredictable mixed-state behavior and are one reason asynchronous execution is so common.
The framework uses turn identifiers aggressively so older async completions do not overwrite newer execution state. That matters for correctness, but it also matters for trust: the system should not silently apply stale tool outcomes after the conversation has moved on.
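The stale-turn guard amounts to a single comparison before any async outcome is applied. A minimal sketch, with hypothetical names rather than the framework's actual schema:

```apex
public class TurnGuardSketch {
    // An async completion carries the turn identifier it was issued for.
    // If the session has since advanced, the outcome is stale and must be
    // dropped rather than silently applied.
    public static Boolean shouldApply(String sessionTurnId, String resultTurnId) {
        return sessionTurnId != null && sessionTurnId == resultTurnId;
    }
}
```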
TransactionContext tracks when the runtime can safely continue in the same transaction and when it
should stop or pivot. That protects both reliability and the integrity of the execution trail.
Captured audit data includes the executions and execution steps persisted for each run.
This makes the framework easier to operate in production because the execution path is inspectable. Security and compliance teams usually care less about the existence of AI and more about whether a specific decision or action can be reconstructed later. These records are what make that possible.
If you are reviewing an agent before rollout, inspect these first:
- `AIAgentDefinition__c` for prompts, provider binding, masking, safety, and runtime settings
- `AgentCapability__c` for capability descriptions, schemas, exposure, async behavior, and HITL mode
- `LLMConfiguration__c` for provider adapter choice and credential binding
- `ExecutionStep__c` for what actually happened during representative runs
- `InteractionMessage__c` and `InteractionSession__c` when the workflow spans channel-based communication

The question is not just “does the agent answer correctly?” It is also whether data visibility, side effects, and the audit trail behave the way reviewers expect.
- **Start in sandbox.** Validate realistic users, data visibility, approvals, and provider behavior before rollout.
- **Use least privilege.** Keep users, service users, and approvers narrowly permissioned.
- **Protect side effects.** Put confirmation or approval in front of record updates, email sends, external callouts, and other impactful actions.
- **Turn on trust layers deliberately.** Add masking, safety, and tool gating based on workflow risk instead of enabling everything blindly.
- **Watch execution data.** Monitor failure rates, token spikes, repeated approval rejections, and unusual routing behavior.
The security story of this framework is strongest when you think of it as a governed runtime, not a chatbot with extra rules attached later.
The real control model is: