⚡ SAFETY STACK · FREE WITH EVERY PACKAGE CONDUCTOR (auto-deploy) REASON (3-candidate think-first) PHOENIX (cascade recovery) DR-GUARD (region failover) NEVER BREAKS ANYTHING
LIVE REPRODUCIBLE PROOF · ZERO INTERNET

TITAN AIR-GAPPED — SHOW ME IT WORKS

Real terminal output from a real test run. Claude API blocked. No internet. No GitHub. No PyPI. A local LLM analyzes a security finding and produces an auto-fix command — all from inside the customer's network.
This is the exact same code that ships inside TITAN AI and runs in every DMZ / FedRAMP High / CMMC Level 3 deployment. Reproduce it yourself with python demo/airlock_local_llm_proof.py

THE CLAIM

Every one of our 25 agents can auto-remediate security findings without calling Claude, OpenAI, Gemini, or any external API. The LLM brain runs locally on the customer's own GPU server. TITAN AI routes every AI call through that local endpoint. Zero packets leave the customer's network.

STEP 1 — ACTIVATE AIRLOCK FULL MODE

Three environment variables flip TITAN AI into airgap mode. No internet, period.

$ export TITAN_AIRLOCK_MODE=full
$ export OLLAMA_HOST=127.0.0.1:11434
$ export TITAN_AIRLOCK_ALLOWLIST=127.0.0.1,localhost

STEP 2 — START LOCAL LLM SERVER (customer's on-prem inference)

In production, this is Ollama running Llama 3.1 70B, Phi-3, Mistral, or Azure OpenAI behind a private endpoint. For this demo we use a compatible mock on localhost:11434.

========================================================================
  TITAN AI — AIRLOCK MODE: LOCAL LLM PROOF (NO INTERNET)
========================================================================

Step 1: Activate AIRLOCK FULL mode
------------------------------------------------------------------------
  TITAN_AIRLOCK_MODE    = full
  OLLAMA_HOST           = 127.0.0.1:11434
  TITAN_AIRLOCK_ALLOWLIST = 127.0.0.1,localhost

Step 2: Start local LLM server (simulates Ollama inside DMZ)
------------------------------------------------------------------------
  Local LLM server:  http://127.0.0.1:11434   <-- STARTED
  This is where Llama 3 / Phi-3 runs in production.

STEP 3 — INTERNET BLOCKED (this is the critical part)

TITAN's airlock.ensure_allowed() is called before every outbound HTTP request. Any host NOT in the customer's allowlist raises AirlockViolation and the call never happens.

Step 3: Verify internet endpoints are BLOCKED
------------------------------------------------------------------------
  api.anthropic.com              BLOCKED [OK]
  api.openai.com                 BLOCKED [OK]
  github.com                     BLOCKED [OK]
  pypi.org                       BLOCKED [OK]

STEP 4 — RUN ai_smart_fix AGAINST A REAL FINDING

Exactly the same method that all 25 agents inherit. Takes a security finding, asks the LLM for a remediation, returns a structured action for human approval.

Step 4: Run ai_smart_fix — routes to localhost:11434, NOT Claude
------------------------------------------------------------------------
  Calling LOCAL LLM with a real security finding...
  LOCAL LLM responded in 6.67s

  LOCAL-LLM GENERATED REMEDIATION:
    summary:           Disable public blob access on Azure Storage account
                        to prevent anonymous PHI exposure
    command:           az storage account update --name <storage-account>
                        --resource-group <rg> --allow-blob-public-access false
    rollback:          az storage account update --name <storage-account>
                        --resource-group <rg> --allow-blob-public-access true
    risk:              high
    hipaa_control:     HIPAA 164.312(a)(1) Access Control
    pre_checks:        ['Verify no legitimate anonymous-access workloads
                        depend on the container', 'Notify downstream
                        consumers to switch to SAS tokens']
    estimated_minutes: 3

STEP 5 — PROOF THAT THE LOCAL SERVER ACTUALLY RECEIVED THE CALL

We log every inbound request on the mock local LLM server. Here's the actual hit.

Step 5: Evidence the local server received the request
------------------------------------------------------------------------
  Time:    12:37:24
  Path:    /v1/messages
  Prompt:  Azure Storage container has public blob access enabled.
           Propose a fix with command, rollback, risk, and HIPAA citation...

========================================================================
  RESULT: AI-generated fix produced using ONLY localhost.
          Claude API was BLOCKED.
          Zero packets left this machine.
          In production DMZ: same code, real Llama 3 on customer GPU.

  This is the exact pipeline in agents/base_agent.py::ai_smart_fix().
  Every one of 25 agents inherits this capability.
  Patent pending USPTO 19/645,524
========================================================================

WHAT THIS PROVES

TITAN AI's smart-fix pipeline is LLM-agnostic and location-agnostic. The customer provides the AI brain; we provide the agents, the scans, the approval workflow, the audit trail. In standard mode the brain is Claude. In AIRLOCK mode the brain is local. The code path is identical.

PRODUCTION LLM OPTIONS IN AIRLOCK MODE

LLMWhere it runsSetup timeGPU required
Llama 3.1 70BCustomer's on-prem GPU (Ollama, vLLM, TGI)2 hoursA10 / A100 / H100
Phi-3 Mini (3.8B)Customer CPU (no GPU)30 minNone — CPU only
Mistral 7B / Mixtral 8x7BCustomer GPU1 hourA10 or better
Azure OpenAIAzure private endpoint (ExpressRoute)30 minNone — managed
AWS Bedrock VPCAWS Outposts / GovCloud private1 hourNone — managed
Any OpenAI-compatible endpointCustomer-hosted LLM gatewayVariesDepends

WHAT THE CUSTOMER'S SECURITY TEAM SEES

EventTraffic directionStatus
TITAN agent scans Azure Stack / AWS OutpostsInternal onlyOK
Agent calls ai_smart_fix()localhostOK
Local LLM analyzes findinglocalhostOK
Attempt to reach api.anthropic.comBLOCKED by airlock.ensure_allowed()DENIED
Attempt to reach api.openai.comBLOCKED by airlock.ensure_allowed()DENIED
Attempt to reach github.comBLOCKED by airlock.ensure_allowed()DENIED
Attempt to reach pypi.orgBLOCKED by airlock.ensure_allowed()DENIED
Human approves fix in web UIInternal onlyOK
Forge executes fix commandInternal onlyOK
Audit log written for HIPAA 164.312(b)Internal onlyOK

RUN THIS YOURSELF

Want to verify it? Clone our repo, run the proof script, watch Claude API get blocked in front of your eyes.

REQUEST AIRLOCK BRIEFING BACK TO AIRLOCK