
Security

Loom is designed so security and governance live inside the runtime itself, not only in process documents, training, or admin convention.


That means production safety is not a single switch. It is the combined effect of:

  • Salesforce user context and data access
  • narrow capability contracts
  • trust layers that screen prompts and tool calls before they leave Salesforce
  • human approval and confirmation for side effects
  • durable execution and message records for auditability

Core Security Layers

Salesforce security model

User context, sharing, CRUD, and field-level access remain the first boundary for all framework behavior.

Trust layers

Masking, prompt safety, tool gating, and approval controls reduce risk before an action is executed or a prompt leaves Salesforce.

Operational auditability

Executions and execution steps are persisted so teams can review what happened after the fact.

Controlled side effects

Human-in-the-loop settings let teams keep sensitive changes under explicit review.

The Security Model In Plain Language

The framework assumes that enterprise AI risk comes from more than one place.

Sometimes the risk is data exposure: too much record content is sent to a provider or shown to a user who should not see it.

Sometimes the risk is unsafe action execution: the model chooses a tool that can update records, send email, or call an external API without enough constraint.

Sometimes the risk is operational opacity: the system did something important, but nobody can reconstruct what happened later.

The core runtime addresses those risks in layers rather than pretending one control solves all of them.

Platform-Native Enforcement

Data access

  • queries should respect access through WITH USER_MODE or equivalent user-mode patterns
  • DML should enforce field accessibility and run in user mode where intended
  • the framework does not depend on privilege escalation to make normal agent behavior work
// Read records in user mode so sharing and field-level security apply.
List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Id IN :accountIds
    WITH USER_MODE
];

// Strip any fields the running user cannot update before saving.
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.UPDATABLE,
    recordsToSave
);
Database.update(decision.getRecords(), false, AccessLevel.USER_MODE);

The key idea is simple: normal agent behavior should never require elevated privileges. If an action or runtime path does need higher privilege, that should be an explicit architectural decision with monitoring and ownership, not an invisible default.

Service-user routing

Service-user routing can be powerful, but it changes the trust model. When you use it, you are no longer saying “this user can do this through an agent.” You are saying “this system path may run under a different identity for a controlled reason.” That deserves explicit review.

Trust Layers

Masking reduces the chance that raw sensitive values leave Salesforce.

  • schema-aware and pattern-aware masking can be applied before provider calls
  • masking behavior is configured per agent through PIIMaskingPreset__c
  • masking is especially important when prompts include record content or user-entered data
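As an illustrative sketch only, a pattern-aware pass can redact matches before prompt text leaves the org. The class name, regex, and replacement token below are assumptions; the framework drives real masking from per-agent PIIMaskingPreset__c configuration, not hard-coded patterns:

```apex
// Illustrative sketch only: a minimal pattern-aware redaction pass run on
// prompt text before a provider callout. The real framework configures
// masking per agent via PIIMaskingPreset__c rather than hard-coded patterns.
public with sharing class PromptMaskingSketch {
    // Example pattern for US SSN-like values (an assumption for illustration).
    private static final Pattern SSN_PATTERN =
        Pattern.compile('\\b\\d{3}-\\d{2}-\\d{4}\\b');

    public static String mask(String promptText) {
        return SSN_PATTERN.matcher(promptText).replaceAll('[REDACTED_SSN]');
    }
}
```

Schema-aware masking works the same way in principle, but keys off field metadata (for example, fields marked as containing PII) instead of raw text patterns.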

These controls are strongest when they are treated as part of normal runtime design, not late-stage hardening. In practice:

  • masking reduces unnecessary exposure before provider calls
  • prompt safety provides an additional model-facing screening layer
  • tool flow gating narrows what the model can do at each moment
  • HITL protects high-impact actions from fully autonomous execution

Human-in-the-Loop Modes

Mode                     | What it does                                              | Best fit
Confirmation             | The user confirms directly in the conversation            | lower-risk conversational writes
Approval                 | The runtime pauses for a formal approval step             | higher-risk or external side effects
ConfirmationThenApproval | Requires chat confirmation before formal approval         | conversational agents with elevated risk
ConditionalApproval      | The model indicates whether approval is needed at runtime | cases where risk depends on the proposed action

Designing Safe Capabilities

Many AI security problems are really capability-design problems.

Safer capability design usually means:

  • one clear business purpose per capability
  • narrow JSON schemas with required fields and enums where possible
  • backend configuration owned by admins, not invented by the model at runtime
  • separate read and write behaviors instead of combining them in one vague tool
  • approval or confirmation in front of record mutation, email sends, and external side effects

If a capability is described as “update records” or “call external service,” the security problem is already upstream of the LLM.
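For contrast, a narrow capability schema might look like the following. This is an illustrative example, not a schema shipped with the framework; the tool name, fields, and enum values are assumptions:

```json
{
  "name": "update_case_priority",
  "description": "Set the priority of a single, already-identified Case.",
  "parameters": {
    "type": "object",
    "properties": {
      "caseId": { "type": "string", "description": "Id of the Case to update" },
      "priority": { "type": "string", "enum": ["Low", "Medium", "High"] }
    },
    "required": ["caseId", "priority"],
    "additionalProperties": false
  }
}
```

One purpose, required fields, an enum instead of free text, and no room for the model to invent extra parameters: each constraint removes a class of unsafe calls before any runtime control has to catch them.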

What Security Depends On Operationally

The framework can provide strong control points, but production safety still depends on how teams use them:

  • capabilities must stay narrow enough that review decisions are meaningful
  • approvers must know what they are expected to approve or reject
  • service users must be permissioned narrowly and monitored
  • execution data must be reviewed when repeated failures or risky patterns appear

Runtime Constraints That Also Protect You

Some of the framework’s operational rules are security-relevant even when they do not look like classic security features.

Callout safety

Salesforce transaction rules around DML and callouts shape the runtime. Those constraints reduce the chance of unpredictable mixed-state behavior and are one reason asynchronous execution is so common.
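For example, Apex throws a CalloutException ("You have uncommitted work pending") if a synchronous callout follows DML in the same transaction. A minimal sketch of the resulting decision, where ProviderCalloutJob is a hypothetical Queueable and not a framework class:

```apex
// Illustrative sketch only: if DML has already run in this transaction, a
// synchronous callout would throw a CalloutException, so the work is
// deferred to an async job instead.
if (Limits.getDmlStatements() > 0) {
    System.enqueueJob(new ProviderCalloutJob(request)); // hypothetical Queueable
} else {
    HttpResponse response = new Http().send(request);   // safe: no prior DML
}
```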

Stale-turn protection

The framework uses turn identifiers aggressively so older async completions do not overwrite newer execution state. That matters for correctness, but it also matters for trust: the system should not silently apply stale tool outcomes after the conversation has moved on.
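The guard amounts to a simple identifier comparison. In this sketch every name (object, field, and method) is an assumption for illustration, not the framework's actual API:

```apex
// Illustrative sketch only: drop async completions that belong to an older
// turn. Object, field, and method names here are assumptions.
public void applyToolResult(ChatSession__c session, ToolResultSketch result) {
    if (result.turnIdentifier != session.CurrentTurnIdentifier__c) {
        // The conversation has moved on; never overwrite newer state.
        return;
    }
    // ...safe to persist the outcome for the current turn...
}
```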

Deferred DML boundaries

TransactionContext tracks when the runtime can safely continue in the same transaction and when it should stop or pivot. That protects both reliability and the integrity of the execution trail.

What the Runtime Records

Captured audit data can include:

  • user messages
  • assistant responses
  • tool calls and tool results
  • failure details and status transitions
  • token counts and estimated cost
  • tool rationale when enabled

This makes the framework easier to operate in production because the execution path is inspectable. Security and compliance teams usually care less about the existence of AI and more about whether a specific decision or action can be reconstructed later. These records are what make that possible.

What To Inspect During A Security Review

If you are reviewing an agent before rollout, inspect these first:

  • AIAgentDefinition__c for prompts, provider binding, masking, safety, and runtime settings
  • AgentCapability__c for capability descriptions, schemas, exposure, async behavior, and HITL mode
  • LLMConfiguration__c for provider adapter choice and credential binding
  • ExecutionStep__c for what actually happened during representative runs
  • InteractionMessage__c and InteractionSession__c when the workflow spans channel-based communication
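Representative runs can be pulled straight from the audit objects during review. A sketch, assuming only standard fields such as Id, Name, and CreatedDate exist on ExecutionStep__c:

```apex
// Illustrative sketch only: pull recent execution steps for review.
// Filter further by agent or session fields as your schema allows.
List<ExecutionStep__c> recentSteps = [
    SELECT Id, Name, CreatedDate
    FROM ExecutionStep__c
    ORDER BY CreatedDate DESC
    LIMIT 50
];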

The question is not just “does the agent answer correctly?” It is also:

  • what data can it see?
  • what actions can it take?
  • what identity does it run under?
  • what evidence exists after it runs?

Production Hardening Checklist

  1. Start in sandbox

    Validate realistic users, data visibility, approvals, and provider behavior before rollout.

  2. Use least privilege

    Keep users, service users, and approvers narrowly permissioned.

  3. Protect side effects

    Put confirmation or approval in front of record updates, email sends, external callouts, and other impactful actions.

  4. Turn on trust layers deliberately

    Add masking, safety, and tool gating based on the workflow risk instead of enabling everything blindly.

  5. Watch execution data

    Monitor failure rates, token spikes, repeated approval rejections, and unusual routing behavior.

Final Takeaway

The security story of this framework is strongest when you think of it as a governed runtime, not a chatbot with extra rules attached later.

The real control model is:

  • Salesforce permissions decide what the runtime is allowed to access
  • capability design decides what the model is allowed to attempt
  • trust layers reduce exposure and narrow action choice
  • HITL protects high-impact operations
  • durable records make the resulting behavior auditable
