Measure the carbon footprint of your AI
Capture AI/LLM telemetry for carbon tracking, cost attribution, and usage analytics — without capturing prompts or responses.
Privacy First
Enterprise-grade security by design. We capture only metadata — never content.
Prompts NEVER captured
Message content is never logged or transmitted to our systems.
Responses NEVER captured
Model outputs pass through unchanged and are never stored.
API keys NEVER logged
Your credentials are forwarded securely and never stored.
Only metadata collected
Model name, token counts, latency, and status — nothing else.
Supported Providers
OpenAI
GPT-4, GPT-3.5, and all OpenAI models
Anthropic / Claude
Claude 4, Claude 3.5, and all Claude models
Google Gemini
Gemini Pro, Gemini Flash, and all Gemini models
Mistral
Mistral Large, Medium, and all Mistral models
LangChain
Via callback handler — works with any LangChain provider
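As a sketch of how a callback-handler integration can capture metadata without content (the class and field names below are illustrative assumptions, not tailpipe's actual handler, which ships with the SDK), a LangChain-style on_llm_end hook only needs the result's usage metadata:

```python
class MetadataCallbackHandler:
    """Sketch of a LangChain-style callback handler that records only
    metadata. Real LangChain handlers subclass
    langchain_core.callbacks.BaseCallbackHandler; this duck-typed
    version shows the shape of the hook."""

    def __init__(self):
        self.records = []

    def on_llm_end(self, response, **kwargs):
        # LangChain passes an LLMResult; token usage lives in llm_output.
        out = getattr(response, "llm_output", None) or {}
        usage = out.get("token_usage", {})
        self.records.append({
            "model": out.get("model_name"),
            "input_tokens": usage.get("prompt_tokens"),
            "output_tokens": usage.get("completion_tokens"),
            # No prompt or completion text is ever stored.
        })
```

In real LangChain code you would pass the handler via `callbacks=[handler]` when invoking a chain or model, so it works with any provider LangChain supports.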
3-Line Integration
Add telemetry to your existing code without changing how you use your LLM client. Python SDK available now.
OpenAI
from openai import OpenAI
from tailpipe_ai import TailpipeAI
tailpipe = TailpipeAI(api_key="tp_xxx")
client = tailpipe.wrap(OpenAI())
# Use as normal — telemetry captured automatically
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Hello!"}]
)
Anthropic / Claude
from anthropic import Anthropic
from tailpipe_ai import TailpipeAI
tailpipe = TailpipeAI(api_key="tp_xxx")
client = tailpipe.wrap(Anthropic())
# Use as normal — telemetry captured automatically
message = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello!"}]
)
Deployment Modes
Choose the integration that fits your architecture.
| Integration | Use Case | Token Accuracy | Latency |
|---|---|---|---|
| Claude Code Streaming | Claude Code, Claude Max/Pro subscriptions | Exact (100%) | ~50ms |
| Lambda Mode | SDK/API batch processing | Exact (100%) | 50–200ms |
| Proxy Mode | High-volume, latency-sensitive | Estimated (~80%) | 0ms |
| Python SDK | Direct instrumentation | Exact (100%) | N/A |
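The ~80% figure for Proxy Mode reflects estimation rather than exact provider-reported usage: a zero-latency proxy cannot wait for the provider's usage numbers. The proxy's actual estimator is not documented here, but a common heuristic of this kind, roughly four characters per token for English text, looks like:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4-characters-per-token
    rule of thumb for English text. Illustrative only, not tailpipe's
    actual estimator."""
    return max(1, round(len(text) / 4))

# Applied to request and response bodies as they pass through,
# this adds no waiting and no extra round trips.
```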
Enterprise Ready
Data Residency
Choose where your telemetry data is stored: EU, US, APAC, or private deployment.
HMAC Signing
Cryptographic request signing for tamper-proof telemetry data.
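Conceptually, HMAC request signing works like this (a minimal sketch using Python's standard library; the canonicalisation, header names, and key format tailpipe actually uses may differ):

```python
import hashlib
import hmac
import json

def sign(secret: str, payload: dict) -> str:
    """HMAC-SHA256 over a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

record = {"model": "gpt-4", "input_tokens": 12, "output_tokens": 34}
sig = sign("tp_secret_example", record)

# The receiver recomputes the signature over the received record and
# compares in constant time; any tampering changes the digest.
assert hmac.compare_digest(sig, sign("tp_secret_example", record))
```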
SCI-AI Compliance
Captures all fields required by the Green Software Foundation SCI-AI specification for AI carbon tracking.
Standardised Telemetry
All integrations produce consistent records: model, tokens (input, output, cache), latency, status, and environment.
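As an illustration of such a record (field names here are assumptions for readability, not tailpipe's exact wire format):

```python
# Hypothetical shape of one standardised telemetry record.
record = {
    "model": "claude-sonnet-4-20250514",
    "provider": "anthropic",
    "tokens": {"input": 812, "output": 214, "cache_read": 0},
    "latency_ms": 1430,
    "status": "success",
    "environment": "production",
}

# The record carries usage metadata only: no prompt or response text.
assert "messages" not in record and "content" not in record
```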
Architecture
Serverless infrastructure using AWS Lambda, Kinesis Firehose, and S3. Zero infrastructure to manage.
┌─────────────────┐     ┌──────────────────────────────────┐
│  Claude Code    │────▶│ Lambda Function URL (Streaming)  │
│  (OAuth/API)    │     │  - Bearer token passthrough      │
└─────────────────┘     │  - SSE token extraction          │
                        │  - Firehose telemetry            │
┌─────────────────┐     └──────────────────────────────────┘
│  SDK/API        │     ┌──────────────────────────────────┐
│  (API Key)      │────▶│ API Gateway + Lambda             │
└─────────────────┘     │  - Exact token capture           │
                        │  - Streaming support             │
                        └──────────────────────────────────┘
                                         │
                                         ▼
                        ┌──────────────────────────────────┐
                        │ Kinesis Firehose                 │
                        │  - Batching (60s / 5MB)          │
                        │  - GZIP compression              │
                        └──────────────────────────────────┘
                                         │
                                         ▼
                        ┌──────────────────────────────────┐
                        │ S3 (ai-telemetry-raw/)           │
                        │  - Hive partitioning             │
                        │  - year=/month=/day=/            │
                        └──────────────────────────────────┘
                                         │
                                 Daily Compaction
                                         │
                                         ▼
                        ┌──────────────────────────────────┐
                        │ S3 (ai-telemetry/)               │
                        │  - Single daily files            │
                        │  - telemetry-YYYYMMDD.jsonl.gz   │
                        └──────────────────────────────────┘
Start tracking your AI carbon footprint
Get set up in minutes with our Python SDK or serverless proxy.