# Batch API
The Batch endpoint allows you to submit multiple events in a single HTTP request. This is useful for data imports, historical backfills, and high-throughput server-side integrations where reducing HTTP overhead matters.
## Endpoint

```
POST /v1/batch
```

## Authentication

The Batch endpoint supports both authentication methods:

- HMAC-SHA256 (recommended for server-to-server) — same as the Server-Side Events endpoint
- Pipeline key via an `Authorization: Bearer dk_...` header
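As a sketch, the two authentication styles differ only in the headers sent. The snippet below shows both, reusing the `X-Signature`/`X-Timestamp` header names from the examples later in this page; the key values are placeholders:

```python
import hashlib
import hmac
import time

SECRET_KEY = "sk_live_your_secret_key"  # placeholder HMAC secret
PIPELINE_KEY = "dk_your_pipeline_key"   # placeholder pipeline key

def hmac_headers(body: str) -> dict:
    """Headers for HMAC-SHA256 auth: sign `timestamp.body` with the secret key."""
    timestamp = str(int(time.time() * 1000))
    signature = hmac.new(
        SECRET_KEY.encode(),
        f"{timestamp}.{body}".encode(),
        hashlib.sha256,
    ).hexdigest()
    return {
        "Content-Type": "application/json",
        "X-Signature": f"sha256={signature}",
        "X-Timestamp": timestamp,
    }

def bearer_headers() -> dict:
    """Headers for pipeline-key auth: just pass the key as a bearer token."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {PIPELINE_KEY}",
    }
```

Use `hmac_headers` for server-to-server integrations where the secret never leaves your infrastructure; the bearer form trades that guarantee for simplicity.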
## Request Format

The request body is a JSON object with a single `batch` array containing the events:

```json
{
  "batch": [
    {
      "type": "track",
      "event": "Order Completed",
      "userId": "user_001",
      "properties": {
        "order_id": "ORD-001",
        "revenue": 99.99,
        "currency": "USD"
      },
      "timestamp": "2026-02-24T10:00:00.000Z"
    },
    {
      "type": "track",
      "event": "Order Completed",
      "userId": "user_002",
      "properties": {
        "order_id": "ORD-002",
        "revenue": 249.50,
        "currency": "USD"
      },
      "timestamp": "2026-02-24T10:05:00.000Z"
    },
    {
      "type": "identify",
      "userId": "user_001",
      "traits": {
        "email": "alice@example.com",
        "name": "Alice Johnson",
        "plan": "premium"
      },
      "timestamp": "2026-02-24T10:00:00.000Z"
    }
  ]
}
```

Each item in the `batch` array follows the same schema as a single event sent to the Server-Side Events endpoint.
## Limits
| Constraint | Value |
|---|---|
| Maximum events per batch | 500 |
| Maximum payload size | 5 MB |
| Maximum individual event size | 64 KB |
If the batch exceeds 500 events or 5 MB, the entire request is rejected with a 400 Bad Request response. Split large imports into multiple requests.
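A hypothetical chunking helper can enforce both limits before sending, so a large import never triggers a 400. This is a minimal sketch that estimates payload size from the compact JSON encoding of each event:

```python
import json

MAX_EVENTS = 500               # maximum events per batch
MAX_BYTES = 5 * 1024 * 1024    # maximum payload size (5 MB)

def chunk_events(events: list[dict]) -> list[list[dict]]:
    """Split events into batches under both the count and payload-size limits."""
    envelope = len('{"batch":[]}')  # bytes of the wrapper object itself
    batches, current, size = [], [], envelope
    for event in events:
        # +1 accounts for the comma separating events inside the array
        event_bytes = len(json.dumps(event, separators=(",", ":")).encode()) + 1
        if current and (len(current) >= MAX_EVENTS or size + event_bytes > MAX_BYTES):
            batches.append(current)
            current, size = [], envelope
        current.append(event)
        size += event_bytes
    if current:
        batches.append(current)
    return batches
```

Note that individual events over 64 KB will still be rejected per-event; this helper only keeps the batch as a whole within limits.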
## Response

### Success

When all events are accepted:

```json
{
  "success": true,
  "accepted": 3
}
```

### Partial Failure
If some events in the batch fail validation while others succeed, the gateway still returns 200 but reports the failures:
```json
{
  "success": true,
  "accepted": 2,
  "errors": [
    {
      "index": 1,
      "message": "Missing required field: type"
    }
  ]
}
```

Status codes: `200` on success (full or partial), `400` if the batch itself is malformed or exceeds limits, `401` for invalid authentication, `429` if rate-limited.
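Because a 200 response can still carry per-event failures, client code should branch on both the status code and the `errors` array. A minimal sketch of that logic (the function name is illustrative, not part of any SDK):

```python
def handle_batch_response(status: int, payload: dict) -> list[dict]:
    """Interpret a Batch API response.

    Returns the list of per-event errors (empty on full success);
    raises on request-level failures.
    """
    if status == 200:
        errors = payload.get("errors", [])
        for err in errors:
            # Partial failure: these events were rejected, the rest were accepted.
            print(f"event at index {err['index']} failed: {err['message']}")
        return errors
    if status == 400:
        raise ValueError("batch malformed or over limits; split it and resend")
    if status == 401:
        raise PermissionError("authentication failed; check your key")
    if status == 429:
        raise RuntimeError("rate limited; honor Retry-After before resending")
    raise RuntimeError(f"unexpected status {status}")
```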
## Example: Historical Backfill
Here is a Node.js example that reads events from a JSON file and submits them in batches of 500:
```javascript
import crypto from "node:crypto";
import { readFileSync } from "node:fs";

const SECRET_KEY = "sk_live_your_secret_key";
const ENDPOINT = "https://data.example.com/v1/batch";
const BATCH_SIZE = 500;

async function sendBatch(events) {
  const timestamp = Date.now().toString();
  const body = JSON.stringify({ batch: events });
  const signature = crypto
    .createHmac("sha256", SECRET_KEY)
    .update(`${timestamp}.${body}`)
    .digest("hex");

  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Signature": `sha256=${signature}`,
      "X-Timestamp": timestamp,
    },
    body,
  });
  return response.json();
}

// Load events from a JSON file
const allEvents = JSON.parse(readFileSync("events.json", "utf-8"));

// Send in batches of 500
for (let i = 0; i < allEvents.length; i += BATCH_SIZE) {
  const batch = allEvents.slice(i, i + BATCH_SIZE);
  const result = await sendBatch(batch);
  console.log(
    `Batch ${Math.floor(i / BATCH_SIZE) + 1}: ${result.accepted} accepted`
  );
  if (result.errors?.length) {
    console.warn("Errors:", result.errors);
  }
}
```

## Example: Python Backfill
```python
import hashlib
import hmac
import json
import time

import requests

SECRET_KEY = "sk_live_your_secret_key"
ENDPOINT = "https://data.example.com/v1/batch"
BATCH_SIZE = 500

def send_batch(events: list[dict]) -> dict:
    timestamp = str(int(time.time() * 1000))
    body = json.dumps({"batch": events}, separators=(",", ":"))
    signature = hmac.new(
        SECRET_KEY.encode(),
        f"{timestamp}.{body}".encode(),
        hashlib.sha256,
    ).hexdigest()
    response = requests.post(
        ENDPOINT,
        headers={
            "Content-Type": "application/json",
            "X-Signature": f"sha256={signature}",
            "X-Timestamp": timestamp,
        },
        data=body,
    )
    return response.json()

# Load events
with open("events.json") as f:
    all_events = json.load(f)

# Send in batches
for i in range(0, len(all_events), BATCH_SIZE):
    batch = all_events[i : i + BATCH_SIZE]
    result = send_batch(batch)
    print(f"Batch {i // BATCH_SIZE + 1}: {result['accepted']} accepted")
    if result.get("errors"):
        print("Errors:", result["errors"])
```

## Use Cases
| Use Case | Description |
|---|---|
| Historical backfill | Import past events from a data warehouse or CSV export |
| Data migration | Migrate events from another analytics platform |
| High-throughput ingestion | Reduce HTTP overhead by batching events from a busy server |
| Periodic sync | Batch events collected offline and send them on a schedule |
## Best Practices

- Set accurate timestamps. For historical imports, always include the `timestamp` field with the original event time. The processing layer and downstream vendors rely on this for attribution.
- Include `messageId` for deduplication. If you might retry a batch (e.g. after a network error), include a `messageId` on each event so the Event Processor can deduplicate.
- Respect rate limits. If you receive a `429` response, back off using the `Retry-After` header value before sending the next batch.
- Monitor partial failures. Always check the `errors` array in the response and handle or log failed events.
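The `messageId` and `Retry-After` practices combine naturally: attach stable IDs once, then retry the same payload on 429. A sketch under the assumption that the transport is injected as a `send(events)` callable returning `(status, headers, body)`; the helper names are illustrative:

```python
import time
import uuid

def with_message_ids(events: list[dict]) -> list[dict]:
    """Attach a messageId to each event so retried batches can be deduplicated."""
    return [{**e, "messageId": e.get("messageId") or str(uuid.uuid4())} for e in events]

def send_with_retry(send, events: list[dict], max_attempts: int = 5) -> dict:
    """Send a batch, backing off on 429 using the Retry-After header.

    messageIds are assigned once, so every retry carries the same IDs
    and the Event Processor can drop duplicates.
    """
    events = with_message_ids(events)
    for attempt in range(max_attempts):
        status, headers, body = send(events)
        if status != 429:
            return body
        # Honor Retry-After if present; fall back to exponential backoff.
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("still rate limited after retries")
```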