Use cases

Built for the work modern apps depend on.

Six common patterns SimpleQ replaces — with code you can copy today.

Use case

AI job processing

Pain

LLM calls are slow, fail intermittently, and hit token-per-minute rate limits at the worst moments. Doing them inline blocks user requests, blows past timeouts, and silently drops work when a worker restarts.

How SimpleQ solves it

Enqueue every AI call. SimpleQ retries on 429s and 5xxs, respects per-key rate limits across your fleet, and gives you per-job logs with payloads, attempts, and latency.

ai-jobs/summarize.ts

```ts
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  payload: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: text }]
  },
  retry: { maxAttempts: 5, backoff: "exponential" },
  rateLimitKey: "openai-prod"
});
```
Last 24h · ai-jobs (live): 12,402 jobs/hr throughput · 99.91% success rate · 812 ms avg latency · 1,184 429s absorbed
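The snippet above asks for `backoff: "exponential"`, meaning the wait between attempts grows with each failure. SimpleQ's exact schedule isn't shown here, so as a rough sketch, a typical exponential-backoff calculation looks like this (the 1 s base delay and 60 s cap are illustrative assumptions, not SimpleQ's documented defaults):

```ts
// Delay before retry attempt `attempt` (1-based), doubling each time,
// capped so a long retry chain never waits unboundedly.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}

// With maxAttempts: 5, the waits between the four retries would be
// 1s, 2s, 4s, 8s under these assumed defaults.
const delays = [1, 2, 3, 4].map((n) => backoffDelayMs(n));
```

Production systems usually also add jitter so a burst of failures doesn't retry in lockstep.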
Use case

Webhook delivery

Pain

When you call a customer's webhook, it might be down, slow, or rate-limited. Without retries with backoff and a dead-letter queue, you lose events — and trust.

How SimpleQ solves it

Enqueue webhook deliveries with idempotency keys, exponential backoff, and dead-letter handling. Replay failed deliveries from the dashboard with one click.

webhooks/deliver.ts

```ts
await simpleq.enqueue({
  queue: "webhooks",
  type: "http.request",
  idempotencyKey: `evt_${event.id}`,
  payload: {
    url: customer.webhookUrl,
    method: "POST",
    body: event,
    headers: { "X-Signature": sign(event) }
  },
  retry: { maxAttempts: 8, backoff: "exponential" }
});
```
Last 24h · webhooks (live): 1,021,338 delivered · 8,442 retried · 12 dead-lettered · 412 ms P95 latency
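The delivery payload above calls a `sign(event)` helper that isn't defined in the snippet. A common way to implement webhook signatures is an HMAC-SHA256 over the serialized event with a shared secret; a sketch of what such a helper might look like (the secret source and signature scheme are assumptions, not SimpleQ specifics):

```ts
import { createHmac } from "node:crypto";

// Hypothetical signer: HMAC-SHA256 of the JSON body, hex-encoded,
// so the receiver can verify the payload came from you unmodified.
function sign(event: unknown, secret = process.env.WEBHOOK_SECRET ?? ""): string {
  return createHmac("sha256", secret)
    .update(JSON.stringify(event))
    .digest("hex");
}
```

The receiver recomputes the same HMAC over the raw request body and compares it to the `X-Signature` header, ideally with a constant-time comparison.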
Use case

Bulk API sync

Pain

Syncing thousands of records to Stripe, Salesforce, or a CRM means hammering rate limits, watching jobs die halfway, and writing custom resume logic every time.

How SimpleQ solves it

Fan out work onto a queue with a per-provider rate limit. SimpleQ paces requests, retries failures, and resumes from the last attempt — no custom checkpointing required.

billing/sync.ts

```ts
for (const customer of batch) {
  await simpleq.enqueue({
    queue: "billing-sync",
    type: "stripe.upsert-customer",
    payload: customer,
    rateLimitKey: "stripe-prod",
    retry: { maxAttempts: 5, backoff: "exponential" }
  });
}
```
Last batch · billing-sync (live): 240,118 records · auto-paced throttling · 3 failed · 12m 04s duration
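SimpleQ does the pacing server-side for jobs sharing a `rateLimitKey`, but the idea behind a per-provider rate limit is worth seeing. A token bucket is the classic mechanism: each request spends a token, and tokens refill at the provider's allowed rate. A minimal sketch, purely for illustration and not SimpleQ internals:

```ts
// Minimal token bucket: holds up to `capacity` tokens, refilled
// continuously at `ratePerSec`. `tryTake` answers "may one request
// proceed right now?"
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private ratePerSec: number, now = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  tryTake(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The capacity allows short bursts while the refill rate bounds sustained throughput, which is why throttled syncs show up as "auto-paced" rather than as errors.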
Use case

Scheduled jobs

Pain

Cron servers are single points of failure with no visibility, no retries, and a tendency to silently skip runs after a deploy.

How SimpleQ solves it

Schedule one-off or recurring jobs from your code. SimpleQ runs them at the right time, retries on failure, and shows you every past run.

reminders/schedule.ts

```ts
await simpleq.schedule({
  queue: "reminders",
  type: "email.send-followup",
  payload: { userId: user.id },
  runAt: addDays(new Date(), 3)
});

await simpleq.schedule({
  queue: "digests",
  type: "email.daily-digest",
  cron: "0 9 * * 1-5",
  timezone: "America/Los_Angeles"
});
```
Upcoming · reminders (live): 84,201 scheduled · next run in 3m 12s · 212 recurring · 0 missed
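The one-off example above uses an `addDays` helper that isn't defined in the snippet; it matches the signature of date-fns' `addDays`, and a dependency-free version is small enough to inline:

```ts
// Returns a new Date `n` days after `date`, leaving the input untouched.
// setDate handles month and year rollover automatically.
function addDays(date: Date, n: number): Date {
  const result = new Date(date);
  result.setDate(result.getDate() + n);
  return result;
}
```

The recurring example's `cron: "0 9 * * 1-5"` fires at 09:00 on Monday through Friday, evaluated in the given `timezone`.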
Use case

SMS and email workflows

Pain

Outbound messaging is easy to get wrong: duplicate sends, accidental spam, and messages landing in spam folders when you ignore provider rate limits.

How SimpleQ solves it

Throttle outbound messages with per-provider rate limits and use idempotency keys to prevent duplicates across retries and worker restarts.

outbound/sms.ts

```ts
await simpleq.enqueue({
  queue: "outbound-sms",
  type: "twilio.sms",
  idempotencyKey: `sms_${user.id}_${campaign.id}`,
  payload: {
    to: user.phone,
    body: render(template, user)
  },
  rateLimitKey: "twilio-prod",
  retry: { maxAttempts: 3, backoff: "exponential" }
});
```
Last 24h · outbound-sms (live): 84,920 sent · auto throttling · 1,204 duplicates blocked · 8 failed
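Idempotency keys only block duplicates if something refuses to process a key it has already seen. SimpleQ handles that check on the server side; the core logic, reduced to its essentials, is just this (in-memory `Set` for illustration only, since a real implementation persists seen keys with a TTL):

```ts
// Tracks processed idempotency keys; repeat keys are rejected.
// In-memory for illustration: survives retries within one process,
// but a real system stores keys durably so restarts can't resend.
const seen = new Set<string>();

function shouldProcess(idempotencyKey: string): boolean {
  if (seen.has(idempotencyKey)) return false;
  seen.add(idempotencyKey);
  return true;
}
```

Because the key above is derived from `user.id` and `campaign.id`, re-enqueuing the same campaign send for the same user is a no-op rather than a second text message.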
Use case

API orchestration

Pain

Real workflows chain calls together: OpenAI → database → webhook → notification. Doing this inline means one failure breaks the whole chain.

How SimpleQ solves it

Enqueue each step as its own job. Failures retry independently, and each step is observable, replayable, and idempotent.

workflows/summarize.ts

```ts
// step 1: summarize
await simpleq.enqueue({
  queue: "ai-jobs",
  type: "openai.chat",
  payload: { model: "gpt-4o-mini", input },
  next: {
    queue: "db",
    type: "summary.persist",
    next: {
      queue: "webhooks",
      type: "http.request",
      payload: { url: customer.webhookUrl }
    }
  }
});
```
Workflow · summarize-and-notify (live): 4 steps · 1,420 jobs/hr throughput · 99.96% success rate · replays available for 30 days
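The nested `next` objects above form a linked list of steps: when a step succeeds, its `next` descriptor gets enqueued, so each hop retries on its own schedule. A rough sketch of walking such a chain (the `Job` shape is inferred from the snippet, not SimpleQ's actual worker API):

```ts
// Hypothetical job descriptor mirroring the enqueue payload above.
interface Job {
  queue: string;
  type: string;
  payload?: unknown;
  next?: Job;
}

// Flattens a chained job into the ordered steps that would run,
// which is what the dashboard's per-step view is built from.
function chainSteps(job: Job): string[] {
  const steps: string[] = [];
  for (let current: Job | undefined = job; current; current = current.next) {
    steps.push(`${current.queue}/${current.type}`);
  }
  return steps;
}
```

Modeling the chain as data rather than inline `await`s is what lets a mid-chain failure retry just that step instead of restarting from the LLM call.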

Ready to ship reliable async work?

Sign up free and have your first job running in under five minutes.