Security at TrustAI
AI agents act on your behalf. We take that responsibility seriously. Here is how we protect your data, your customers' data, and your business.
Multi-Tenant Isolation
- Every customer gets a dedicated environment — no shared containers between tenants
- Row Level Security (RLS) enforced at the database layer on every table
- Agent sessions are isolated by design: separate event logs, separate context windows
- Vault credentials scoped per session — no session can access another customer's integrations
- Supabase RLS policies enforce customer_id boundaries even if application code has a bug
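The database enforces the tenant boundary via RLS, but the same invariant can be mirrored in application code as a defense-in-depth check. The sketch below is illustrative only, not TrustAI's implementation; the `assertTenantScope` helper and the row shape are assumptions.

```typescript
// Hypothetical application-layer tenant guard mirroring the RLS policy.
// Even if a query forgets its WHERE clause, rows belonging to another
// customer are rejected before they reach the caller.
interface TenantRow {
  customer_id: string;
  [key: string]: unknown;
}

function assertTenantScope<T extends TenantRow>(rows: T[], customerId: string): T[] {
  for (const row of rows) {
    if (row.customer_id !== customerId) {
      // Fail closed: a single foreign row aborts the whole response.
      throw new Error("tenant isolation violation: unexpected customer_id");
    }
  }
  return rows;
}
```

Pairing a check like this with database-level RLS means a bug in either layer is caught by the other.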
Authentication & Access Control
- Supabase Auth with JWT validation on every server request: getUser() (which verifies the token with the auth server) rather than getSession() (which only reads the unverified cookie)
- Separate admin authentication via time-limited admin_token cookie — never mixed with customer auth
- Fail-closed middleware: any auth error redirects to login, never exposes protected pages
- OAuth scopes for agent integrations are least-privilege: agents request only the permissions they need
- All admin actions are logged to the audit trail with actor, timestamp, and action detail
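The fail-closed rule above can be reduced to a single decision function: only an explicit, error-free user grants access, and every other outcome resolves to a redirect. This is a hedged sketch of the idea, not the actual middleware; the `gate` function and result shape are assumptions.

```typescript
// Hypothetical fail-closed gate, as would be called from middleware.
// Any auth error or missing user produces a redirect, never the page.
type AuthResult =
  | { user: { id: string }; error: null }
  | { user: null; error: Error | null };

function gate(result: AuthResult): "allow" | "redirect-to-login" {
  // Fail closed: deny unless the user is present AND there was no error.
  if (result.error !== null || result.user === null) return "redirect-to-login";
  return "allow";
}
```

Structuring the check so that "deny" is the default branch means new error cases added later cannot accidentally open the gate.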
Data Encryption
- Data at rest: AES-256 encryption via Supabase on all database tables and file storage
- Data in transit: TLS 1.2+ enforced on all connections — no unencrypted HTTP
- Knowledge base documents are encrypted in Qdrant vector storage
- Agent session logs are encrypted at rest and isolated per customer
- Backups are encrypted with the same AES-256 standard
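The at-rest encryption described above is handled by Supabase at the storage layer; the sketch below only illustrates the same primitive (AES-256 in GCM mode, which also authenticates the ciphertext) applied to a payload. Function names and the envelope shape are assumptions, not TrustAI's code.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative AES-256-GCM round trip with a random IV and auth tag.
function encrypt(plaintext: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // tampered ciphertext makes final() throw
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means a tampered session log fails decryption outright instead of silently yielding garbage.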
Infrastructure
- Hosted on Vercel (EU region) with edge functions running in Frankfurt
- Supabase database in EU West region (Frankfurt) — EU data residency by default
- Qdrant vector database hosted in EU region
- Anthropic AI inference: data processed under Anthropic's DPA with Standard Contractual Clauses for transfers
- No customer data is stored in logs outside the EU without SCCs in place
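As one illustration of how EU residency can be pinned rather than left to defaults, Vercel Serverless Functions accept a region list in `vercel.json`; `fra1` is Vercel's Frankfurt region identifier. This is a generic config sketch, not necessarily TrustAI's exact deployment configuration.

```json
{
  "regions": ["fra1"]
}
```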
Monitoring & Audit
- 680+ automated tests run on every code push — CI blocks deploys on failure
- Every agent action is logged to an immutable audit trail with full reasoning
- Approval queue creates a human-in-the-loop record for every agent decision reviewed
- Incident tracking in the Control Tower with severity classification and resolution status
- Automated security regression tests in CI prevent the auth gate from being accidentally weakened
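One common way to make an audit trail tamper-evident is hash chaining: each entry commits to the previous entry's hash, so rewriting history breaks the chain. The sketch below is a minimal illustration of that technique under assumed entry fields (actor, action, reasoning, timestamp); it is not TrustAI's audit implementation.

```typescript
import { createHash } from "node:crypto";

// Hypothetical hash-chained audit trail: append-only, verifiable end to end.
interface AuditEntry {
  actor: string;
  action: string;
  reasoning: string;
  timestamp: string;
  prevHash: string;
  hash: string;
}

function appendEntry(trail: AuditEntry[], e: Omit<AuditEntry, "prevHash" | "hash">): AuditEntry[] {
  const prevHash = trail.length ? trail[trail.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...e, prevHash }))
    .digest("hex");
  return [...trail, { ...e, prevHash, hash }];
}

function verifyTrail(trail: AuditEntry[]): boolean {
  let prev = "genesis";
  for (const { hash, prevHash, ...e } of trail) {
    const expected = createHash("sha256")
      .update(JSON.stringify({ ...e, prevHash: prev }))
      .digest("hex");
    if (prevHash !== prev || hash !== expected) return false;
    prev = hash;
  }
  return true;
}
```

Because every hash depends on everything before it, editing or deleting any historical entry invalidates all later entries at verification time.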
Compliance
- EU AI Act: agents classified by risk level; human oversight required for semi-autonomous actions
- GDPR: data processing agreements available; data subject rights supported; DPO contact available
- Sub-processor list published in Privacy Policy with transfer mechanisms documented
- SOC 2 Type II audit planned for Q3 2026
- ISO 27001 roadmap in progress for enterprise customers
Responsible Disclosure
We take security reports seriously. If you discover a security vulnerability in TrustAI's platform, APIs, or agent infrastructure, please report it responsibly. We commit to:
- Acknowledge your report within 48 hours
- Investigate and provide a status update within 7 days
- Notify you when the issue is resolved
- Credit you in our security acknowledgements (unless you prefer anonymity)
- Not pursue legal action against researchers acting in good faith
Report a vulnerability: security@trustai.me
Please do not publicly disclose vulnerabilities before we have had a chance to address them. We operate a coordinated disclosure policy with a 90-day remediation window.
Security questions or compliance enquiries? security@trustai.me