Inngest vs Trigger.dev vs Raw Vercel Cron: Background Work in AI Apps
Background jobs are where AI apps quietly fail. Cron is too dumb. Inngest and Trigger.dev fight for the slot. Here's what each one does, where they break, and what I actually pick.
Most AI apps don't fail in the chat interface. They fail in the background. The webhook that didn't retry. The long-running job that timed out. The scheduled task that silently stopped firing three weeks ago.
You can build background work three ways: raw Vercel cron, Inngest, or Trigger.dev. They are not the same, and the wrong choice costs you debugging hours.
What raw Vercel cron gives you
Vercel cron is a scheduled trigger. You define a cron expression in vercel.json. At the scheduled time, Vercel hits an API route in your app. Your route runs the work.
That's it. That's the whole feature.
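A minimal setup is two pieces: the schedule in `vercel.json` and the route it hits. The path, schedule, and handler body here are illustrative; the `CRON_SECRET` check reflects Vercel's convention of sending `Authorization: Bearer <CRON_SECRET>` when that env var is set.

```typescript
// vercel.json — "0 8 * * *" = daily at 08:00 UTC (path and schedule are examples):
// { "crons": [{ "path": "/api/cron/daily-digest", "schedule": "0 8 * * *" }] }

// app/api/cron/daily-digest/route.ts
export async function GET(req: Request): Promise<Response> {
  // Reject callers that don't present the shared secret Vercel attaches.
  const auth = req.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }
  // ...the actual periodic work goes here (digest email, cleanup, etc.)...
  return new Response("ok");
}
```

If the work throws or the function times out, nothing retries it — that responsibility is entirely yours.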
You get:
- Scheduled execution
- Execution time capped at your plan's max function duration
- No retries (you build them)
- No durable state (you persist yourself)
- No fan-out (one trigger = one route hit)
- Logs in Vercel's logging UI
For dead-simple periodic work — daily digest emails, weekly cleanup tasks, hourly health checks — Vercel cron is enough. Don't pull in a platform for something a cron expression can handle.
What Inngest gives you on top
Inngest is event-driven orchestration. You send events. Inngest invokes registered functions. Each function can have steps. Each step runs durably (Inngest checkpoints state). Retries are automatic. Concurrency is configurable. Dead letter queues are built in.
You get:
- Event-driven invocation
- Durable execution (step results are checkpointed)
- Automatic retries with backoff
- Concurrency control per function
- Cron scheduling
- Fan-out (one event invokes many functions)
- Built-in observability and replay
- Local dev mirrors production
The killer feature is `step.run()`. Each step is a checkpoint. If your function fails on step 5 of 12, it restarts on step 5, not step 1. For long AI pipelines this is huge.
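A toy model of what step-level durability buys you — not the Inngest SDK, just the checkpointing idea, with an in-memory Map standing in for the state the platform persists server-side:

```typescript
// Each step checks the checkpoint store first. A retried run replays
// completed steps from their saved results instead of re-executing them,
// so a failure at step 5 resumes at step 5 rather than step 1.
// (In-memory Map for illustration only; the real platform persists this.)
type Checkpoints = Map<string, unknown>;

async function step<T>(
  cp: Checkpoints,
  name: string,
  fn: () => T | Promise<T>
): Promise<T> {
  if (cp.has(name)) return cp.get(name) as T; // replay the result, don't re-run
  const result = await fn();
  cp.set(name, result); // checkpoint before moving on
  return result;
}
```

On retry, the same checkpoint store is passed back in: steps that already completed return instantly, and execution resumes at the first step without a saved result.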
What Trigger.dev gives you
Trigger.dev is also durable orchestration. The model is similar to Inngest with some differences.
You get:
- Long-running tasks (effectively unlimited execution time)
- File system persistence between steps
- Run isolation (each run gets its own machine, more isolation than Inngest's serverless)
- Built-in machine sizes (you pick CPU/RAM for the task)
- Realtime task observability
- Cron scheduling
- Self-hostable
The killer feature for Trigger.dev is the machine model. Your task gets a real container with file system access. If you need to download a 500MB video, transcode it, and upload the result, Trigger.dev does this naturally. Inngest can do it but feels more constrained.
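The shape of that machine-backed work, sketched without the Trigger.dev SDK: pull bytes to local scratch disk, run a processing step against the file, return the result. The "transcode" here is a stub (uppercasing); a real task would shell out to ffmpeg or similar against the run's file system.

```typescript
import { mkdtemp, writeFile, readFile, rm } from "node:fs/promises";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Work that assumes a real file system: stage input on disk, process it,
// clean up. This is the pattern a machine-backed run makes natural.
async function processOnDisk(input: Buffer): Promise<Buffer> {
  const dir = await mkdtemp(join(tmpdir(), "job-")); // per-run scratch dir
  try {
    const src = join(dir, "input.bin");
    await writeFile(src, input);
    // Stand-in for the heavy step (ffmpeg, whisper, etc.):
    const raw = await readFile(src);
    return Buffer.from(raw.toString("utf8").toUpperCase());
  } finally {
    await rm(dir, { recursive: true, force: true }); // always clean up scratch
  }
}
```

In a serverless step model you'd have to re-stage the file on every step; with a persistent machine the scratch directory survives across the whole run.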
Where each one wins
Vercel cron wins when:
- The task takes less than 5 minutes
- You don't need retries (or you'll build them)
- You don't need state between steps
- It's truly periodic with no event-driven triggers
Inngest wins when:
- The work is event-driven, not just scheduled
- You need durable step-by-step execution
- You're already on Vercel and want the same dev velocity
- You have many small functions with fan-out patterns
- Cost matters (Inngest's pricing scales gently)
Trigger.dev wins when:
- Tasks need real machines (file system, custom binaries, ffmpeg, etc.)
- Tasks are long (>15 min) and need isolated compute
- You need to self-host the runtime
- You want explicit CPU/RAM sizing per task
Where each one breaks
Vercel cron breaks the moment your task needs retries or state. You build it yourself. By the third retry pattern you've reinvented Inngest poorly.
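Here's roughly what that hand-rolled pattern looks like — a sketch, assuming the work is idempotent and using illustrative delay values. A platform gives you this plus persistence, dead-lettering, and observability for free.

```typescript
// Minimal retry with exponential backoff — the kind of code you end up
// hand-rolling on raw routes. Assumes `fn` is safe to call repeatedly.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // exhausted all attempts
}
```

And this is only the easy part: it doesn't survive a process restart, doesn't cap concurrency, and leaves no trace of why the final attempt failed.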
Inngest breaks for tasks that need a real file system or heavy binary tools. The serverless model is the constraint.
Trigger.dev breaks for very high-throughput small-task workloads. The machine model has more overhead per run than Inngest's serverless model. For 50,000 tiny tasks a day, Inngest is more efficient.
The actual AI-app patterns
For AI apps these are the common background patterns:
1. **Async LLM job.** User uploads doc. Server enqueues "process this." Background worker runs LLM, stores result, notifies user. INNGEST or TRIGGER.
2. **Scheduled scoring.** Every hour, run the lead-scoring model across new leads. VERCEL CRON or INNGEST.
3. **Long-running file processing.** Transcribe a 90-min audio file, generate timestamps, store result. TRIGGER.DEV.
4. **Event-driven email sequence.** User signs up, fires 5 emails over 30 days, branches based on opens. INNGEST.
5. **Fan-out to N items.** New batch arrives with 200 items. Process each in parallel. INNGEST (good) or TRIGGER (also fine).
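Pattern 5, reduced to its core, is bounded-concurrency fan-out. A sketch of the in-process version (the worker-pool approach and the concurrency limit are illustrative); on a platform you'd instead emit one event per item and let the runtime enforce the cap:

```typescript
// Process N items with at most `limit` in flight at once: a fixed pool of
// workers pulls the next index from a shared cursor until items run out.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0; // shared cursor; safe because JS is single-threaded
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++;
        results[i] = await fn(items[i]);
      }
    }
  );
  await Promise.all(workers);
  return results; // same order as the input
}
```

The platform version of this is more robust: each item becomes its own run with its own retries, so one poisoned item doesn't take down the batch.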
My defaults across these:
- Simple periodic: Vercel cron
- Async LLM jobs: Inngest
- File-system heavy: Trigger.dev
- Event-driven sequences: Inngest
Pricing comparison at scale
For 1,000 background runs/day with ~5 steps each (5k step runs):
- Vercel cron: free (it's just route invocations on your existing plan)
- Inngest: free tier covers it
- Trigger.dev: free tier covers it
For 50,000 runs/day with 10 steps each (500k step runs):
- Vercel cron: route invocation costs on Vercel (~$30-50/mo additional)
- Inngest: roughly $50-100/mo
- Trigger.dev: roughly $150-300/mo (more per-run overhead)
For 500,000 runs/day:
- Vercel cron: not the right tool at this scale
- Inngest: roughly $500-800/mo
- Trigger.dev: roughly $1,000-2,000/mo
Inngest wins on high-throughput economics. Trigger.dev wins on long-tail durability and machine flexibility.
What I default to
Inngest, for most AI app background work.
The combination of step durability, observability, and serverless pricing fits the shape of AI work better than the alternatives. The dev loop is fast.
I use Trigger.dev when the task needs real file system access or unusually long runtimes. I use Vercel cron for the simplest scheduled jobs that don't need retries.
I avoid building durable background work from scratch on raw API routes. I've done it. It's always worse than picking a platform.
The thing nobody mentions
The observability is the actual product.
When a background AI job fails at 3 AM, you need to know why fast. Inngest and Trigger.dev both give you per-run logs, step-by-step replay, and error context. Vercel logs are flat and harder to trace.
If you're going to spend 15 hours debugging background failures across a year, the observability investment pays for itself.
What changes in 2026
Both Inngest and Trigger.dev are racing to add AI-specific features (model fan-out, prompt versioning, eval integration). The platform race is moving toward "the AI agent runtime" rather than just background jobs.
Vercel is adding their own primitives (Vercel Queues, fluid compute durability) that may eat into the lower end of this market.
The right answer in 12 months may be different. The right answer today is: don't build durability from scratch.