$ python demo/airlock_local_llm_proof.py
Every one of our 25 agents can auto-remediate security findings without calling Claude, OpenAI, Gemini, or any external API. The LLM brain runs locally on the customer's own GPU server. TITAN AI routes every AI call through that local endpoint. Zero packets leave the customer's network.
Three environment variables flip TITAN AI into airgap mode. No internet, period.
$ export TITAN_AIRLOCK_MODE=full
$ export OLLAMA_HOST=127.0.0.1:11434
$ export TITAN_AIRLOCK_ALLOWLIST=127.0.0.1,localhost
In production, this is Ollama running Llama 3.1 70B, Phi-3, Mistral, or Azure OpenAI behind a private endpoint. For this demo we use a compatible mock on localhost:11434.
========================================================================
  TITAN AI — AIRLOCK MODE: LOCAL LLM PROOF (NO INTERNET)
========================================================================

Step 1: Activate AIRLOCK FULL mode
------------------------------------------------------------------------
  TITAN_AIRLOCK_MODE      = full
  OLLAMA_HOST             = 127.0.0.1:11434
  TITAN_AIRLOCK_ALLOWLIST = 127.0.0.1,localhost

Step 2: Start local LLM server (simulates Ollama inside DMZ)
------------------------------------------------------------------------
  Local LLM server: http://127.0.0.1:11434  <-- STARTED
  This is where Llama 3 / Phi-3 runs in production.
TITAN's airlock.ensure_allowed() runs before every outbound HTTP request. A request to any host NOT in the customer's allowlist raises AirlockViolation, and the call never happens.
Step 3: Verify internet endpoints are BLOCKED
------------------------------------------------------------------------
  api.anthropic.com    BLOCKED  [OK]
  api.openai.com       BLOCKED  [OK]
  github.com           BLOCKED  [OK]
  pypi.org             BLOCKED  [OK]
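The gate itself is small enough to sketch. The names below (`Airlock`, `AirlockViolation`, `ensure_allowed`) mirror what the demo prints, but the internals are an illustrative assumption, not the shipped module:

```python
import os
from urllib.parse import urlparse

class AirlockViolation(RuntimeError):
    """Raised when an outbound request targets a host outside the allowlist."""

class Airlock:
    def __init__(self) -> None:
        # Allowlist comes straight from the customer's environment variable.
        raw = os.environ.get("TITAN_AIRLOCK_ALLOWLIST", "")
        self.allowlist = {h.strip() for h in raw.split(",") if h.strip()}

    def ensure_allowed(self, url: str) -> None:
        """Call this before every outbound HTTP request."""
        host = urlparse(url).hostname
        if host not in self.allowlist:
            raise AirlockViolation(f"Outbound call to {host!r} blocked by airlock")

os.environ["TITAN_AIRLOCK_ALLOWLIST"] = "127.0.0.1,localhost"
airlock = Airlock()
airlock.ensure_allowed("http://127.0.0.1:11434/v1/messages")   # passes silently
try:
    airlock.ensure_allowed("https://api.anthropic.com/v1/messages")
except AirlockViolation as exc:
    print(exc)  # the hosted API is never reached
```

Because the check happens before the socket is opened, a denied host costs one exception, not one leaked packet.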
This is exactly the method all 25 agents inherit: it takes a security finding, asks the LLM for a remediation, and returns a structured action for human approval.
Step 4: Run ai_smart_fix — routes to localhost:11434, NOT Claude
------------------------------------------------------------------------
  Calling LOCAL LLM with a real security finding...
  LOCAL LLM responded in 6.67s

  LOCAL-LLM GENERATED REMEDIATION:
    summary: Disable public blob access on Azure Storage account to prevent anonymous PHI exposure
    command: az storage account update --name <storage-account> --resource-group <rg> --allow-blob-public-access false
    rollback: az storage account update --name <storage-account> --resource-group <rg> --allow-blob-public-access true
    risk: high
    hipaa_control: HIPAA 164.312(a)(1) Access Control
    pre_checks: ['Verify no legitimate anonymous-access workloads depend on the container', 'Notify downstream consumers to switch to SAS tokens']
    estimated_minutes: 3
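The shape of that call can be sketched as follows. This is an illustrative stand-in for `ai_smart_fix()`, not its actual source; the injectable `post` parameter and the `fake_llm` stub are assumptions added so the sketch runs without a live server:

```python
import json
import urllib.request

# The demo's mock server answers on this path (see Step 5 evidence).
LOCAL_LLM_URL = "http://127.0.0.1:11434/v1/messages"

def ai_smart_fix(finding: str, post=None) -> dict:
    """Send one security finding to the local LLM; return a structured action."""
    prompt = (
        f"{finding}\n"
        "Propose a fix with command, rollback, risk, and HIPAA citation. "
        "Respond as a single JSON object."
    )
    if post is None:
        # Default transport: POST JSON to the local endpoint, never a hosted API.
        def post(url, body):
            req = urllib.request.Request(
                url,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    reply = post(LOCAL_LLM_URL, {"prompt": prompt})
    # The action is returned for human approval -- never auto-executed.
    return json.loads(reply["text"])

# Stub standing in for the mock server on localhost:11434:
def fake_llm(url, body):
    return {"text": json.dumps({
        "summary": "Disable public blob access",
        "command": "az storage account update ... --allow-blob-public-access false",
        "risk": "high",
    })}

action = ai_smart_fix(
    "Azure Storage container has public blob access enabled.", post=fake_llm
)
print(action["risk"])  # high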
We log every inbound request on the mock local LLM server. Here's the actual hit.
Step 5: Evidence the local server received the request
------------------------------------------------------------------------
Time: 12:37:24
Path: /v1/messages
Prompt: Azure Storage container has public blob access enabled.
Propose a fix with command, rollback, risk, and HIPAA citation...
========================================================================
RESULT: AI-generated fix produced using ONLY localhost.
Claude API was BLOCKED.
Zero packets left this machine.
In production DMZ: same code, real Llama 3 on customer GPU.
This is the exact pipeline in agents/base_agent.py::ai_smart_fix().
Every one of 25 agents inherits this capability.
Patent pending USPTO 19/645,524
========================================================================
TITAN AI's smart-fix pipeline is LLM-agnostic and location-agnostic. The customer provides the AI brain; we provide the agents, the scans, the approval workflow, the audit trail. In standard mode the brain is Claude. In AIRLOCK mode the brain is local. The code path is identical.
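That "identical code path" claim reduces to a single routing decision. A minimal sketch, assuming a helper named `resolve_llm_endpoint()` (hypothetical; the real routing lives inside TITAN's agent base class):

```python
import os

def resolve_llm_endpoint() -> str:
    """Pick the LLM endpoint; everything downstream of this is identical."""
    if os.environ.get("TITAN_AIRLOCK_MODE") == "full":
        # AIRLOCK mode: the customer's local model (Ollama-style host)
        host = os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")
        return f"http://{host}/v1/messages"
    # Standard mode: hosted Claude API
    return "https://api.anthropic.com/v1/messages"

os.environ["TITAN_AIRLOCK_MODE"] = "full"
print(resolve_llm_endpoint())  # http://127.0.0.1:11434/v1/messages
```

Swapping brains is therefore a configuration change, not a code change: the prompt construction, approval workflow, and audit trail never see which endpoint answered.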
| LLM | Where it runs | Setup time | GPU required |
|---|---|---|---|
| Llama 3.1 70B | Customer's on-prem GPU (Ollama, vLLM, TGI) | 2 hours | A10 / A100 / H100 |
| Phi-3 Mini (3.8B) | Customer CPU (no GPU) | 30 min | None — CPU only |
| Mistral 7B / Mixtral 8x7B | Customer GPU | 1 hour | A10 or better |
| Azure OpenAI | Azure private endpoint (ExpressRoute) | 30 min | None — managed |
| AWS Bedrock VPC | AWS Outposts / GovCloud private | 1 hour | None — managed |
| Any OpenAI-compatible endpoint | Customer-hosted LLM gateway | Varies | Depends |
| Event | Traffic direction | Status |
|---|---|---|
| TITAN agent scans Azure Stack / AWS Outposts | Internal only | OK |
| Agent calls ai_smart_fix() | localhost | OK |
| Local LLM analyzes finding | localhost | OK |
| Attempt to reach api.anthropic.com | BLOCKED by airlock.ensure_allowed() | DENIED |
| Attempt to reach api.openai.com | BLOCKED by airlock.ensure_allowed() | DENIED |
| Attempt to reach github.com | BLOCKED by airlock.ensure_allowed() | DENIED |
| Attempt to reach pypi.org | BLOCKED by airlock.ensure_allowed() | DENIED |
| Human approves fix in web UI | Internal only | OK |
| Forge executes fix command | Internal only | OK |
| Audit log written for HIPAA 164.312(b) | Internal only | OK |
Want to verify it? Clone our repo, run the proof script, and watch the Claude API get blocked before your eyes.