Platform
Enterprise AI Security That Goes Beyond the Perimeter
Traditional AI platforms secure the API gateway and hope for the best. Agentica secures every layer — from input validation to agent reasoning to output verification. Four of our 17 architectures exist specifically to keep AI decisions safe, auditable, and under your control.
Built for Regulated Industries
Agentica is designed to meet the requirements of the compliance frameworks outlined below. We partner with your compliance team to ensure your deployment meets your specific regulatory obligations.
Security Capabilities
Authentication & Session Management
Every API request is authenticated with industry-standard token-based security. User identities and conversation sessions are managed through separate token types.
Dual JWT token system via python-jose with HS256 signing. User tokens carry sub=user_id for identity operations. Session tokens carry sub=session_id for conversation operations. Tokens include standard JWT claims (sub, exp, iat, jti) with configurable expiration periods.
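The platform issues tokens via python-jose; the sketch below shows the same dual-token scheme using only the standard library, so the claim structure (`sub`, `exp`, `iat`, `jti`) is concrete. Function names, secrets, and the 15-minute TTL are illustrative, not the platform's actual values.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def create_token(subject: str, secret: str, ttl_seconds: int = 900) -> str:
    """Issue an HS256 JWT carrying the standard claims (sub, exp, iat, jti)."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {"sub": subject, "iat": now, "exp": now + ttl_seconds, "jti": str(uuid.uuid4())}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(claims).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"


def verify_token(token: str, secret: str) -> dict:
    """Check the HMAC signature and expiry, then return the claims."""
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig_b64):
        raise ValueError("bad signature")
    claims_b64 = signing_input.split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(claims_b64 + "=" * (-len(claims_b64) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims


# The two token types differ only in what `sub` carries: a user_id for
# identity operations, a session_id for conversation operations.
user_token = create_token("user-42", "user-secret")
session_token = create_token("sess-7f3a", "session-secret")
```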
Input Validation & Sanitization
Every piece of data entering the platform is validated and sanitized before it reaches the AI agent. This prevents injection attacks, cross-site scripting, and prompt manipulation.
Multi-layer sanitization pipeline: HTML escaping, script tag removal, null byte stripping, email validation, password strength enforcement (min 8 chars with mixed case, numeric, special), message content validation (1–3000 chars). All request bodies validated through Pydantic schemas as DTOs.
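A minimal sketch of that pipeline, assuming the stated rules (null-byte stripping, script removal, HTML escaping, 1–3000 char messages, 8+ char mixed-complexity passwords). In the platform these checks live in Pydantic DTO validators; the standalone functions here are illustrative.

```python
import html
import re

SCRIPT_RE = re.compile(r"<script\b[^>]*>.*?</script>", re.IGNORECASE | re.DOTALL)


def sanitize_message(text: str) -> str:
    """Strip null bytes, drop script tags, enforce 1-3000 chars, escape HTML."""
    text = text.replace("\x00", "")
    text = SCRIPT_RE.sub("", text).strip()
    if not 1 <= len(text) <= 3000:
        raise ValueError("message must be 1-3000 characters")
    return html.escape(text)


def password_is_strong(pw: str) -> bool:
    """Min 8 chars with upper case, lower case, a digit, and a special char."""
    return (
        len(pw) >= 8
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(not c.isalnum() for c in pw)
    )
```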
Rate Limiting & Abuse Prevention
Intelligent rate limiting protects your deployment from abuse, denial-of-service attacks, and runaway costs. Limits are configured per-endpoint based on resource intensity.
Implemented via slowapi with IP-based identification. Chat: 30/min, Streaming: 20/min, Message retrieval: 50/min, Registration: 10/hour, Login: 20/min. Global defaults: 200/day, 50/hour. Environment-specific overrides adjust limits automatically.
Data Handling & Isolation
Your data stays within your control. Conversation state, user data, and agent memory are stored in your PostgreSQL instance with connection pooling and health monitoring.
Two connection paths: SQLModel/SQLAlchemy sessions for CRUD, psycopg3 AsyncConnectionPool for LangGraph checkpoints. Connection pooling with pool_pre_ping, 30-min recycle, configurable pool size (20+10 overflow). Environment isolation via priority-loaded .env files.
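The SQLModel/SQLAlchemy side of that configuration looks roughly like the sketch below; the DSN is a placeholder, and only the pool parameters (pre-ping, 30-minute recycle, 20 + 10 overflow) come from the description above.

```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg://app:changeme@db:5432/agentica",  # placeholder DSN
    pool_pre_ping=True,   # validate each connection before handing it out
    pool_recycle=1800,    # retire connections after 30 minutes
    pool_size=20,         # persistent connections
    max_overflow=10,      # extra connections allowed under burst load
)
```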
Audit Trails & Observability
Every request, every AI reasoning step, every tool call, every response is traced and logged. Your compliance team gets the audit trail they need.
Three layers: Structured logging (structlog) with context-bound user_id/session_id and JSON output in production. Prometheus metrics (HTTP requests, LLM latency, DB connections) with Grafana dashboards. Langfuse tracing with end-to-end agent reasoning chains and automated quality scoring.
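The context-binding idea behind the structlog layer can be shown with a stdlib-only sketch: a request-scoped context carries `user_id`/`session_id`, and every log line is emitted as one JSON object merged with that context. The names here are illustrative, not the platform's actual logging module.

```python
import contextvars
import json
import logging

# Request-scoped context, mirroring structlog's bind() of user_id/session_id.
_request_ctx = contextvars.ContextVar("request_ctx", default={})


def bind(**fields):
    """Attach fields to every subsequent log line in this request context."""
    _request_ctx.set({**_request_ctx.get(), **fields})


class JsonContextFormatter(logging.Formatter):
    """Emit one JSON object per log line, merged with the bound context."""

    def format(self, record):
        entry = {"level": record.levelname.lower(), "event": record.getMessage()}
        entry.update(_request_ctx.get())
        return json.dumps(entry)
```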
Container Security & Deployment Hardening
Agentica runs as non-root containers with minimal attack surface. Health checks monitor application and database connectivity, automatically flagging degraded states.
Non-root execution via dedicated appuser. Minimal base image (python:3.13-slim). Health checks polling every 30s. 5-service Docker Compose on isolated bridge network. Secrets injected via environment variables at runtime, never baked into images.
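A hedged Dockerfile sketch of those hardening points; paths, the app entrypoint, and the health endpoint are assumptions, while the slim base image, the dedicated `appuser`, and the 30-second health-check interval come from the description above.

```dockerfile
# Minimal base image keeps the attack surface small
FROM python:3.13-slim

# Run as a dedicated non-root user
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser/app

COPY --chown=appuser:appuser . .
RUN pip install --no-cache-dir --user -r requirements.txt

# Poll application health every 30s; failures flag the container as unhealthy
HEALTHCHECK --interval=30s --timeout=5s \
  CMD ["python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]

# Secrets arrive via environment variables at runtime, never baked into the image
CMD ["python", "-m", "uvicorn", "app.main:app", "--host", "0.0.0.0"]
```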
AI That Governs Itself
Security does not stop at the API boundary. Four of Agentica’s 17 architectures are purpose-built for AI governance — ensuring agents make safe decisions, escalate when uncertain, and never act without appropriate oversight.
Human Approval Gateway
The agent generates its planned action, displays a complete preview, and waits for explicit human approval before executing. Nothing happens in the real world until a human says “go.” Critical for financial transactions, content publishing, and infrastructure changes.
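The control flow reduces to a simple contract: preview first, execute only on explicit approval. The sketch below is illustrative (the gateway's real interface is not shown here); `approve` stands in for whatever human-in-the-loop channel the deployment wires up.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PlannedAction:
    name: str
    params: dict

    def preview(self) -> str:
        """Complete human-readable preview shown before anything executes."""
        return f"About to run {self.name} with {self.params}"


def execute_with_approval(
    action: PlannedAction,
    approve: Callable[[str], bool],
    run: Callable[[PlannedAction], str],
) -> str:
    """Execute only after explicit human approval; otherwise do nothing."""
    if not approve(action.preview()):
        return "rejected: no side effects performed"
    return run(action)
```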
Self-Aware Safety Agent
The agent maintains an explicit model of what it knows and what it does not know. Before every response, it assesses its own confidence. High confidence: answers directly. Uncertain: uses specialized tools. Out of its depth: escalates immediately to a human.
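That three-way routing can be sketched as a threshold function over the agent's self-assessed confidence. The thresholds below are illustrative, not the platform's actual values.

```python
def route(confidence: float, high: float = 0.8, low: float = 0.4) -> str:
    """Map self-assessed confidence to one of the three behaviors above."""
    if confidence >= high:
        return "answer_directly"
    if confidence >= low:
        return "use_specialized_tool"
    return "escalate_to_human"
```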
Self-Healing Pipeline
Every tool call and API response is verified before the agent proceeds. If verification fails, the system automatically replans with alternative approaches — up to 3 attempts. Bad data never cascades into bad decisions.
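The verify-then-replan loop, reduced to a sketch: each candidate plan is executed, its result verified, and the next alternative tried on failure, capped at 3 attempts. Function names are illustrative.

```python
def run_with_verification(plans, execute, verify, max_attempts=3):
    """Try alternative plans in order; only a verified result passes through."""
    for plan in plans[:max_attempts]:
        result = execute(plan)
        if verify(result):
            return result
    raise RuntimeError(f"all {min(len(plans), max_attempts)} attempts failed verification")
```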
Risk Simulation Engine
Before committing to high-stakes actions, the agent forks the scenario into multiple independent simulations. It analyzes the distribution of outcomes across simulations before deciding. If variance is too high, it takes the conservative path.
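A minimal sketch of that decision rule: fork n simulations, measure the spread of outcomes, and fall back to the conservative path when the variance exceeds a threshold. The simulation count and threshold here are illustrative.

```python
import statistics


def decide(simulate, n=5, variance_threshold=1.0):
    """Fork n independent simulations; go conservative when outcomes disagree."""
    outcomes = [simulate(i) for i in range(n)]
    if statistics.pvariance(outcomes) > variance_threshold:
        return "conservative_path"
    return "proceed"
```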
These safety architectures can be composed with any other pattern. A Multi-Agent Specialist Team can use the Human Approval Gateway before publishing its report. A Knowledge Graph Intelligence agent can use the Self-Aware Safety Agent to escalate questions outside its domain. Safety is composable, not a checkbox.
Compliance Framework Alignment
| Relevance | How Agentica Supports It |
|---|---|
| Trust services criteria | Structured audit logging (structlog JSON), access controls (JWT dual-token), availability monitoring (Prometheus + health checks), data isolation (environment separation), change management (CI/CD via GitHub Actions) |
| Protected health information | Encryption in transit (HTTPS), access audit trails (Langfuse tracing), role-based access (user/session token separation), data isolation per patient/session, Metacognitive architecture for medical triage escalation |
| Data subject rights | Conversation data scoped to user_id, session-level data deletion via clear_chat_history(), configurable data retention, right to explanation via Langfuse trace inspection, consent management via API |
| Financial reporting controls | Human Approval Gateway (Dry-Run) for financial actions, audit trail for every AI decision, role separation between user and session authorization, non-repudiation via JWT jti claims |
| Information security management | Risk assessment via Mental Loop simulation, access control via JWT, asset management via SQLModel entities, incident response via structured logging and health monitoring |
| Government cloud security | Non-root containers, environment isolation, health check monitoring, configurable deployment (on-premise / air-gapped), no external data leakage (tool calls are configurable and auditable) |
Ready to Discuss Security for Your Deployment?
Our team will walk through how Agentica meets your specific compliance requirements. Bring your security questionnaire — we have the answers.