SimpleQ documentation
The execution layer for AI jobs, webhooks, and API tasks. Get from zero to your first reliable background job in under five minutes.
Quickstart
You'll install the SDK, authenticate with an API key, and enqueue your first job. Each step takes about a minute.
1. Install SDK
```bash
npm install @simpleq/sdk
```

2. Initialize client
```ts
import { SimpleQ } from "@simpleq/sdk";

const simpleq = new SimpleQ({
  apiKey: process.env.SIMPLEQ_API_KEY
});
```

3. Enqueue a job
```ts
await simpleq.enqueue({
  queue: "default",
  type: "webhook",
  payload: {
    url: "https://example.com/webhook",
    method: "POST",
    body: { event: "user.created" }
  }
});
```

That's it — your job is queued. Open the dashboard to see it move from queued → running → completed.
Install SDK
SDKs are available for TypeScript / JavaScript, Python, and Go. They all share the same primitives and the same API surface.
```bash
# TypeScript / JavaScript
npm install @simpleq/sdk

# Python
pip install simpleq

# Go
go get github.com/simpleq/simpleq-go
```

Authentication
Generate an API key in the dashboard and pass it to the SDK as SIMPLEQ_API_KEY. Keys can be scoped to a single project and rotated at any time.
```ts
const simpleq = new SimpleQ({
  apiKey: process.env.SIMPLEQ_API_KEY,
  project: "production"
});
```

Create a queue
Queues are logical groupings of jobs. Create one per workload — ai-jobs, webhooks, outbound-sms — so you can scale and rate-limit them independently.
```ts
await simpleq.queues.create({
  name: "ai-jobs",
  concurrency: 32,
  retry: { maxAttempts: 5, backoff: "exponential" }
});
```

Enqueue a job
Every job has a queue, a type, and a payload. Optionally attach an idempotency key, retry policy, and rate-limit key.
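Idempotency keys make enqueues safe to retry: two enqueues that share a key produce a single job. A minimal sketch of that dedupe semantics (illustrative only, not SimpleQ internals):

```ts
// Illustrative dedupe: enqueues sharing an idempotencyKey collapse to one job.
const seen = new Map<string, string>();
let nextId = 0;

function enqueueOnce(idempotencyKey: string): string {
  const existing = seen.get(idempotencyKey);
  if (existing) return existing;           // duplicate: return the original job id
  const id = `job_${++nextId}`;
  seen.set(idempotencyKey, id);
  return id;
}

const a = enqueueOnce("summary_req_42");
const b = enqueueOnce("summary_req_42");   // retried enqueue, same key
console.log(a === b); // true — only one job was created
```

This is why the example below derives the key from the request (`summary_${requestId}`): a retried request re-enqueues with the same key and cannot double-run.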
```ts
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  idempotencyKey: `summary_${requestId}`,
  payload: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: text }]
  },
  retry: { maxAttempts: 5, backoff: "exponential" },
  rateLimitKey: "openai-prod"
});
```

Schedule a job
Schedule a one-off job at a specific time, or a recurring job with a cron expression and timezone. SimpleQ guarantees the job runs even across deploys.
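The recurring example below uses the cron expression `0 9 * * 1-5`, which fires at 09:00 on Monday through Friday. A sketch of what that expression matches, assuming standard five-field cron semantics:

```ts
// Sketch: does a local timestamp match "0 9 * * 1-5"
// (minute 0, hour 9, any day-of-month, any month, Mon–Fri)?
function matchesWeekdayNineAm(d: Date): boolean {
  return (
    d.getMinutes() === 0 &&
    d.getHours() === 9 &&
    d.getDay() >= 1 && d.getDay() <= 5 // 1 = Monday ... 5 = Friday
  );
}

console.log(matchesWeekdayNineAm(new Date(2024, 0, 1, 9, 0))); // Mon Jan 1 2024, 09:00 → true
console.log(matchesWeekdayNineAm(new Date(2024, 0, 6, 9, 0))); // Sat Jan 6 2024, 09:00 → false
```

SimpleQ evaluates the expression in the timezone you pass alongside it; this sketch just uses the local clock for illustration.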
```ts
// one-off
await simpleq.schedule({
  queue: "reminders",
  type: "email.send-followup",
  payload: { userId: "u_123" },
  runAt: new Date(Date.now() + 1000 * 60 * 60 * 24)
});

// recurring
await simpleq.schedule({
  queue: "digests",
  type: "email.daily-digest",
  cron: "0 9 * * 1-5",
  timezone: "America/Los_Angeles"
});
```

Retry policies
Configure max attempts, backoff strategy, and jitter per queue or per job. SimpleQ tracks every attempt and surfaces the full attempt history in the dashboard.
```ts
{
  retry: {
    maxAttempts: 5,
    backoff: "exponential", // "fixed" | "exponential"
    initialDelayMs: 1000,
    maxDelayMs: 60_000,
    jitter: "full" // "none" | "equal" | "full"
  }
}
```

When max attempts are exhausted, jobs land in their queue's dead-letter queue. Replay them from the dashboard or via the API.
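The delay schedule these settings produce can be sketched as follows. This is the standard full-jitter backoff formula for illustration, not SimpleQ's internal implementation:

```ts
// Illustrative sketch: the delay before a given retry attempt under
// { backoff: "exponential", initialDelayMs, maxDelayMs, jitter: "full" }.
function retryDelayMs(
  attempt: number,        // 1-based number of the attempt that just failed
  initialDelayMs = 1000,
  maxDelayMs = 60_000,
  random: () => number = Math.random
): number {
  // Exponential growth: 1s, 2s, 4s, ... capped at maxDelayMs.
  const exp = Math.min(maxDelayMs, initialDelayMs * 2 ** (attempt - 1));
  // "full" jitter draws uniformly from [0, exp] so retries from many
  // failed jobs don't all land on the upstream at the same instant.
  return random() * exp;
}

// With the random draw pinned to 1 for inspection, the ceiling per attempt is:
const delays = [1, 2, 3, 4, 5, 6, 7].map((a) => retryDelayMs(a, 1000, 60_000, () => 1));
console.log(delays); // [1000, 2000, 4000, 8000, 16000, 32000, 60000]
```

Note the cap: from attempt 7 onward the ceiling stays at `maxDelayMs`.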
Rate limits
Pass a rateLimitKey on enqueue to share a token bucket across all jobs that hit the same upstream. Configure the bucket once per key.
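Conceptually, each key maps to a single token bucket: it holds at most `burst` tokens, refills at `requestsPerSecond`, and a job starts only when it can take a token. An illustrative sketch of those semantics (not SimpleQ's implementation):

```ts
// Illustrative token bucket: refills at `requestsPerSecond`, holds at most
// `burst` tokens. A job may start only if a token is available.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private requestsPerSecond: number,
    private burst: number,
    now = 0
  ) {
    this.tokens = burst; // start full
    this.lastRefill = now;
  }

  // Returns true if a job may run at time `now` (ms), consuming one token.
  tryAcquire(now: number): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.requestsPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// { requestsPerSecond: 50, burst: 100 } admits a burst of 100 jobs at once,
// then sustains 50 jobs per second.
const bucket = new TokenBucket(50, 100);
let admitted = 0;
for (let i = 0; i < 120; i++) if (bucket.tryAcquire(0)) admitted++;
console.log(admitted); // 100 — the burst cap
```

Because the bucket is shared across workers, adding more workers never pushes the upstream past the configured rate.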
```ts
await simpleq.rateLimits.set("openai-prod", {
  requestsPerSecond: 50,
  burst: 100
});
```

Webhooks
SimpleQ ships an http.request job type that handles retries, signing, and timeouts for outbound webhooks.
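The example below passes an `X-Signature` header computed by a `sign()` helper. SimpleQ's exact signing scheme isn't documented in this section; a common shape, shown here as a hypothetical sketch, is an HMAC-SHA256 of the JSON body keyed by a per-customer secret:

```ts
import { createHmac } from "node:crypto";

// Hypothetical sign() helper for the X-Signature header: HMAC-SHA256 of the
// JSON-serialized event, keyed by a shared secret. Receivers recompute the
// HMAC over the raw body and compare, proving the payload wasn't tampered with.
function sign(event: object, secret = process.env.WEBHOOK_SECRET ?? ""): string {
  return createHmac("sha256", secret).update(JSON.stringify(event)).digest("hex");
}
```

Whatever scheme you use, sign the exact bytes you send: re-serializing on the receiving side can reorder keys and break verification.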
```ts
await simpleq.enqueue({
  queue: "webhooks",
  type: "http.request",
  idempotencyKey: `evt_${event.id}`,
  payload: {
    url: customer.webhookUrl,
    method: "POST",
    body: event,
    headers: { "X-Signature": sign(event) },
    timeoutMs: 30_000
  },
  retry: { maxAttempts: 8, backoff: "exponential" }
});
```

OpenAI connector
The OpenAI connector handles auth, retries on 429s and 5xxs, and rate-limit pacing across your workers.
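The "retries on 429s and 5xxs" behavior boils down to a status-code predicate like the following (an illustrative sketch, not the connector's source):

```ts
// Sketch: which HTTP status codes a connector would treat as retryable.
// 429 = rate limited, 5xx = transient upstream failure. Other 4xx client
// errors (bad request, auth) are permanent and should fail fast instead.
function isRetryableStatus(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}

console.log([200, 400, 429, 500, 503].filter(isRetryableStatus)); // [429, 500, 503]
```

Failing fast on permanent errors matters: retrying a 401 five times just burns attempts before the job reaches the dead-letter queue.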
```ts
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  payload: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello, world" }]
  },
  rateLimitKey: "openai-prod"
});
```

API reference
The full REST API mirrors the SDK. Authenticate with Authorization: Bearer $SIMPLEQ_API_KEY.
```
POST /v1/jobs              Enqueue a job
POST /v1/jobs/:id/retry    Retry a failed job
GET  /v1/jobs/:id          Get job status
POST /v1/schedules         Create a schedule
GET  /v1/queues            List queues
POST /v1/rate-limits/:key  Set rate limit
```
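As an illustration of the SDK/REST mirroring, the quickstart's enqueue call maps onto `POST /v1/jobs` roughly as below. This is a sketch: the base URL and the assumption that the JSON body mirrors the SDK's `enqueue()` arguments are hypothetical, as the wire format isn't documented in this section.

```ts
// Sketch: build the raw fetch options for POST /v1/jobs. Pass the result to
// fetch(req.url, req) to actually send it.
function buildEnqueueRequest(apiKey: string, job: object) {
  return {
    url: "https://api.simpleq.dev/v1/jobs", // hypothetical base URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(job), // assumed to mirror the SDK's enqueue() arguments
  };
}

const req = buildEnqueueRequest("sk_test_123", {
  queue: "default",
  type: "webhook",
  payload: { url: "https://example.com/webhook", method: "POST" },
});
console.log(req.method, req.headers.Authorization); // POST Bearer sk_test_123
```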