Built for the work modern apps depend on.
Six common patterns SimpleQ replaces — with code you can copy today.
AI job processing
LLM calls are slow, fail intermittently, and hit token-per-minute rate limits at the worst moments. Running them inline blocks user requests, blows past request timeouts, and silently drops work when a worker restarts.
Enqueue every AI call. SimpleQ retries on 429s and 5xxs, respects per-key rate limits across your fleet, and gives you per-job logs with payloads, attempts, and latency.
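To make the per-key rate limiting concrete, here is a minimal token-bucket sketch of the kind of throttling described above. This is an illustrative in-process stand-in, not SimpleQ's implementation (SimpleQ enforces the limit fleet-wide for each rateLimitKey); the class and parameter names are hypothetical.

```typescript
// Minimal token bucket: allows up to `capacity` calls, refilling at
// `refillPerSec` tokens per second. Illustrative sketch only.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if a call may proceed at time `now` (ms), consuming one token.
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With a capacity of 2 and a refill rate of 1 token per second, two back-to-back calls pass, a third immediate call is throttled, and a call one second later passes again.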
```typescript
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  payload: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: text }]
  },
  retry: { maxAttempts: 5, backoff: "exponential" },
  rateLimitKey: "openai-prod"
});
```

Webhook delivery
When you call a customer's webhook, it might be down, slow, or rate-limited. Without retries with backoff and a dead-letter queue, you lose events — and trust.
Enqueue webhook deliveries with idempotency keys, exponential backoff, and dead-letter handling. Replay failed deliveries from the dashboard with one click.
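For a rough sense of what an exponential backoff schedule looks like, here is a sketch that doubles the wait before each retry up to a cap. The base delay and cap are made-up numbers; the actual schedule SimpleQ uses is its own.

```typescript
// Delay before each retry: base * 2^(attempt - 1), capped at `maxMs`.
// Illustrative sketch; a production schedule would typically add jitter.
function backoffDelaysMs(maxAttempts: number, baseMs: number, maxMs: number): number[] {
  const delays: number[] = [];
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    delays.push(Math.min(baseMs * 2 ** (attempt - 1), maxMs));
  }
  return delays;
}
```

With maxAttempts: 8, a 1 s base, and a 60 s cap, the waits between attempts are 1 s, 2 s, 4 s, 8 s, 16 s, 32 s, and 60 s, so a briefly-down endpoint gets hit again quickly while a dead one is not hammered.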
```typescript
await simpleq.enqueue({
  queue: "webhooks",
  type: "http.request",
  idempotencyKey: `evt_${event.id}`,
  payload: {
    url: customer.webhookUrl,
    method: "POST",
    body: event,
    headers: { "X-Signature": sign(event) }
  },
  retry: { maxAttempts: 8, backoff: "exponential" }
});
```

Bulk API sync
Syncing thousands of records to Stripe, Salesforce, or a CRM means hammering rate limits, watching jobs die halfway, and writing custom resume logic every time.
Fan out work onto a queue with a per-provider rate limit. SimpleQ paces requests, retries failures, and resumes from the last attempt — no custom checkpointing required.
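For intuition about the pacing involved, here is a sketch that spreads request start times evenly under a given rate limit. The numbers are hypothetical, and with rateLimitKey this bookkeeping happens on SimpleQ's side rather than in your loop.

```typescript
// Evenly spaced start offsets (ms) for `count` requests at `ratePerSec`.
// Illustrative only; SimpleQ applies the limit fleet-wide via rateLimitKey.
function paceDelaysMs(count: number, ratePerSec: number): number[] {
  const gapMs = 1000 / ratePerSec;
  return Array.from({ length: count }, (_, i) => Math.round(i * gapMs));
}
```

At an assumed 25 requests per second, 10,000 records take about 400 seconds to drain, and the queue keeps that pace without any one worker needing the full picture.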
```typescript
for (const customer of batch) {
  await simpleq.enqueue({
    queue: "billing-sync",
    type: "stripe.upsert-customer",
    payload: customer,
    rateLimitKey: "stripe-prod",
    retry: { maxAttempts: 5, backoff: "exponential" }
  });
}
```

Scheduled jobs
Cron servers are single points of failure with no visibility, no retries, and a tendency to silently skip runs after a deploy.
Schedule one-off or recurring jobs from your code. SimpleQ runs them at the right time, retries on failure, and shows you every past run.
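Recurring schedules are written as cron expressions; "0 9 * * 1-5", for example, means 9:00 AM on weekdays. To decode that notation, here is a minimal matcher sketch: a hypothetical helper that handles only "*", plain numbers, and ranges (real cron parsers also support lists, steps, and names).

```typescript
// Returns true if `date` matches a five-field cron expression
// (minute, hour, day-of-month, month, day-of-week).
// Illustrative sketch: supports "*", numbers, and ranges like "1-5".
function cronMatches(expr: string, date: Date): boolean {
  const [min, hour, dom, mon, dow] = expr.trim().split(/\s+/);
  const fieldMatches = (field: string, value: number): boolean => {
    if (field === "*") return true;
    const range = field.match(/^(\d+)-(\d+)$/);
    if (range) return value >= Number(range[1]) && value <= Number(range[2]);
    return Number(field) === value;
  };
  return (
    fieldMatches(min, date.getMinutes()) &&
    fieldMatches(hour, date.getHours()) &&
    fieldMatches(dom, date.getDate()) &&
    fieldMatches(mon, date.getMonth() + 1) &&
    fieldMatches(dow, date.getDay()) // 0 = Sunday … 6 = Saturday
  );
}
```

So "0 9 * * 1-5" fires at minute 0, hour 9, any day of the month, any month, Monday through Friday.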
```typescript
await simpleq.schedule({
  queue: "reminders",
  type: "email.send-followup",
  payload: { userId: user.id },
  runAt: addDays(new Date(), 3)
});

await simpleq.schedule({
  queue: "digests",
  type: "email.daily-digest",
  cron: "0 9 * * 1-5",
  timezone: "America/Los_Angeles"
});
```

SMS and email workflows
Outbound messaging is easy to spam, easy to duplicate, and easy to land in spam folders when you ignore provider rate limits.
Throttle outbound messages with per-provider rate limits and use idempotency keys to prevent duplicates across retries and worker restarts.
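The duplicate-suppression idea behind idempotency keys can be sketched in a few lines. This in-memory class is a stand-in for illustration only; SimpleQ persists this bookkeeping so it holds across retries, workers, and restarts.

```typescript
// Sketch of idempotent enqueue: a key seen before is dropped instead of
// producing a second job. In-memory stand-in; the real store is durable.
class IdempotentQueue<T> {
  private seen = new Set<string>();
  readonly jobs: T[] = [];

  // Returns true if the job was enqueued, false if the key was a duplicate.
  enqueue(idempotencyKey: string, payload: T): boolean {
    if (this.seen.has(idempotencyKey)) return false;
    this.seen.add(idempotencyKey);
    this.jobs.push(payload);
    return true;
  }
}
```

A key like `sms_${user.id}_${campaign.id}` means a crashed worker that re-runs the enqueue loop cannot text the same user about the same campaign twice.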
```typescript
await simpleq.enqueue({
  queue: "outbound-sms",
  type: "twilio.sms",
  idempotencyKey: `sms_${user.id}_${campaign.id}`,
  payload: {
    to: user.phone,
    body: render(template, user)
  },
  rateLimitKey: "twilio-prod",
  retry: { maxAttempts: 3, backoff: "exponential" }
});
```

API orchestration
Real workflows chain calls together: OpenAI → database → webhook → notification. Doing this inline means one failure breaks the whole chain.
Enqueue each step as its own job. Failures retry independently. Each step is observable, replayable, and idempotent.
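The independent-retry behavior can be sketched as a loop that gives each step its own retry budget, so a flaky step retries without re-running the steps before it. This is a simplified in-process stand-in; SimpleQ persists each step as its own durable job on its own queue.

```typescript
// Run steps in order; each step retries up to `maxAttempts` times on its
// own before the chain moves on. Simplified in-process sketch.
type Step<T> = (input: T) => T;

function runChain<T>(steps: Step<T>[], input: T, maxAttempts: number): T {
  let value = input;
  for (const step of steps) {
    let lastError: unknown;
    let done = false;
    for (let attempt = 1; attempt <= maxAttempts && !done; attempt++) {
      try {
        value = step(value);
        done = true;
      } catch (err) {
        lastError = err;
      }
    }
    if (!done) throw lastError; // exhausted retries: fail the chain here
  }
  return value;
}
```

If the second step throws once and then succeeds, the first step is not re-executed: only the failing step pays for its own flakiness.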
```typescript
// step 1: summarize
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  payload: { model: "gpt-4o-mini", input },
  next: {
    queue: "db",
    type: "summary.persist",
    next: {
      queue: "webhooks",
      type: "http.request",
      payload: { url: customer.webhookUrl }
    }
  }
});
```

Ready to ship reliable async work?
Sign up free and have your first job running in under five minutes.