Cloudflare Workers Pricing Explained: When It Gets Expensive

This article contains affiliate links. We may earn a commission if you purchase through them, at no extra cost to you.

You deployed a Worker, watched the free tier handle your traffic with ease, and thought you’d cracked serverless forever. Then your side project got a mention on Hacker News, and you opened your Cloudflare dashboard to a number you weren’t expecting. Or maybe you’re still in evaluation mode, squinting at the pricing page trying to figure out whether “$0.30 per million requests” is actually as good as it sounds. Spoiler: it depends entirely on what your Workers are doing, not just how often they’re called.

This article breaks down Cloudflare Workers pricing in the way the official docs don’t — with the billing edge cases, the CPU time trap, and the specific workload profiles that turn a $5/month bill into a $200/month surprise.

TL;DR — Quick Verdict

Cloudflare Workers is genuinely cheap for lightweight, high-frequency tasks — think auth checks, redirects, A/B testing headers, and simple API proxies. It gets expensive when heavy compute meets heavy traffic, or when you lean on Durable Objects or KV writes at scale. The free tier is legitimately useful. The paid tier’s gotcha isn’t the per-request cost: since the 2023 pricing change, requests and CPU time are metered separately, and most devs only model the first.

How Cloudflare Workers Pricing Actually Works

There are two plans: Free and Workers Paid. The Paid plan uses the “Workers Standard” billing model that replaced the old “Bundled” and “Unbound” plans in 2023. Here’s what each gives you:

Free Plan

  • 100,000 requests/day (resets daily, not monthly — this matters)
  • 10ms CPU time per invocation
  • KV reads free up to the daily allowance (100,000 reads/day, 1,000 writes/day)
  • Cron Triggers work on Free, but with tighter limits than on Paid

Workers Paid — $5/month base

  • 10 million requests included
  • $0.30 per additional million requests
  • 30 million CPU milliseconds included
  • $0.02 per additional million CPU milliseconds — this is the one to watch
  • Longer limits: up to 30 seconds of CPU time per invocation (the limit is CPU time, not wall-clock)

That CPU millisecond pricing is where things get interesting. Let’s do some real math.

The CPU Time Trap: Real Numbers

Say you have a Worker that does something moderately complex — parses a JWT, hits Cloudflare KV, transforms a JSON payload, and returns a response. That might take 5–15ms of actual CPU time per request. At 10 million requests/month, you’re burning through 50–150 million CPU milliseconds.

Your included allowance is 30 million CPU ms. At 150 million CPU ms total:

  • Overage: 120 million CPU ms
  • Cost: 120 × $0.02 = $2.40/month
  • Plus the base $5, plus request overages if you’re past 10M

At this scale the CPU line is pocket change, which is exactly why people stop modeling it. The per-unit price is low, but the line item is the product of two variables: traffic and per-request compute. Double both and that $2.40 becomes roughly $11; grow both by 10x and you’re near $300/month on CPU alone. The request cost grows linearly and predictably. The CPU cost compounds.

Now flip it: if your Worker does a simple redirect or header rewrite — maybe 0.5ms CPU time — those same 10 million requests only burn 5 million CPU ms. You’re well inside the included 30 million. Your bill stays at $5. This is why workload type is everything.

CPU Time vs Wall-Clock Time

This distinction trips up almost every developer new to Workers. Wall-clock time is how long your Worker takes to complete, including time spent waiting on external fetch() calls, KV reads, or D1 queries. CPU time is only the time the V8 isolate is actually executing your JavaScript.

If your Worker calls an external API and waits 400ms for a response, that 400ms of waiting is not billed as CPU time. You’re billed only for the milliseconds your code is actually running. This is good news for I/O-heavy Workers — a Worker that makes a slow upstream API call but does minimal processing is cheap to run.

The expensive Workers are ones doing real computation: image processing logic, cryptographic operations, large JSON parsing/transformation, running WASM modules, or anything with non-trivial loops over large datasets.
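The split is easy to feel with a quick local experiment. This is a Python analogy rather than Workers code: time.process_time() counts only CPU execution, while time.perf_counter() counts elapsed wall-clock time, which mirrors how Workers metering treats compute versus I/O waits.

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()

time.sleep(0.4)                                  # stands in for awaiting a slow upstream API
checksum = sum(i * i for i in range(100_000))    # stands in for real computation

wall_ms = (time.perf_counter() - start_wall) * 1000
cpu_ms = (time.process_time() - start_cpu) * 1000
print(f"wall-clock: {wall_ms:.0f}ms, CPU: {cpu_ms:.1f}ms")
```

The wall-clock figure lands around 400ms because of the sleep; the CPU figure is only the few milliseconds the loop actually ran. A Worker with the same profile would be billed for the small number, not the big one.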

Get the dev tool stack guide

A weekly breakdown of the tools worth your time — and the ones that aren’t. Join 500+ developers.



No spam. Unsubscribe anytime.

Cloudflare Workers Pricing: Full Breakdown Table

Feature                  Free                    Workers Paid ($5/mo)
Requests included        100K/day                10M/month
Overage requests         Blocked                 $0.30/million
CPU time included        10ms/invocation         30M CPU ms/month
CPU time overage         Worker killed at 10ms   $0.02/million CPU ms
Max CPU time/request     10ms                    30 seconds (CPU time)
KV reads                 100K/day free           10M/month, then $0.50/million
KV writes                1K/day free             1M/month, then $5/million
Durable Objects          Not available           $0.15/million requests + storage
D1 (SQLite at edge)      5M rows read/day        25B rows read/month, then $0.001/million
R2 storage               10GB free               $0.015/GB-month beyond 10GB

The Add-Ons That Sneak Up On You

Durable Objects

Durable Objects are powerful — they give you strongly consistent, stateful coordination at the edge, which is genuinely hard to replicate elsewhere. But they have their own billing layer entirely separate from Workers. You pay for:

  • Requests to the DO: $0.15/million
  • Duration: $12.50 per million GB-seconds (how long the DO is active × memory used)
  • Storage: $0.20/GB-month

If you’re building a real-time collaborative app or a WebSocket-heavy service using Durable Objects, model your expected active sessions carefully before you deploy. A few thousand concurrent long-lived connections can rack up duration costs fast.
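To put numbers on that warning, here is a rough duration model. The session count and always-on assumption are illustrative, not a quote; the one fixed input is that Cloudflare bills Durable Objects duration at 128 MB (0.125 GB) of memory per active object.

```python
# Hypothetical: 1,000 long-lived WebSocket sessions, one Durable Object each,
# active around the clock for a 30-day month.
concurrent_objects = 1_000
seconds_per_month = 30 * 24 * 3600               # 2,592,000 seconds
gb_seconds = concurrent_objects * seconds_per_month * 0.125

duration_cost = gb_seconds / 1_000_000 * 12.50   # $12.50 per million GB-seconds
print(f"{gb_seconds:,.0f} GB-s -> ${duration_cost:,.2f}/month")
```

That pencils out to roughly $4,000/month in duration alone, which is why the WebSocket Hibernation API (which lets idle connected objects stop accruing duration) matters so much for this architecture.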

Workers KV — The Write Cost Nobody Talks About

KV reads are cheap. KV writes are $5 per million after the free tier. If your architecture is writing frequently to KV (caching user sessions, updating feature flags per-request, logging anything), you will feel this. KV is designed for read-heavy workloads with infrequent writes. If you’re writing on every request, you’re using it wrong and you’ll pay for it.
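A sketch of the write-per-request anti-pattern makes the point, assuming one KV write on every request against the Paid tier’s 1 million included writes per month:

```python
requests = 10_000_000                            # monthly request volume
included_writes = 1_000_000                      # Workers Paid included KV writes
write_overage = max(0, requests - included_writes) / 1_000_000 * 5.00  # $5/million

print(f"KV write overage: ${write_overage:.2f}/month")
```

That is $45/month in KV writes on a plan whose base fee is $5, nine times the cost of the Workers themselves.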

R2 Egress (Or Lack Thereof)

R2 is legitimately great for one specific reason: no egress fees. If you’re currently paying AWS S3 egress costs and serving assets through Workers, switching to R2 can be a meaningful saving. This is one area where Cloudflare’s pricing is genuinely competitive without asterisks.
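A back-of-envelope comparison shows the scale of the saving. The S3 egress rate here is an assumption (roughly the first-tier internet egress price), and the asset footprint is made up; R2 charges nothing for egress and $0.015/GB-month for storage beyond the free 10 GB:

```python
egress_gb = 1_000                                # 1 TB of assets served per month
s3_egress = egress_gb * 0.09                     # assumed S3 egress ~$0.09/GB

r2_storage_gb = 100                              # assumed asset footprint in R2
r2_cost = max(0, r2_storage_gb - 10) * 0.015     # storage is R2's only charge here

print(f"S3 egress ~${s3_egress:.2f}/month vs R2 ~${r2_cost:.2f}/month")
```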

Workload Profiles: Will Your Worker Be Cheap or Expensive?

Cheap Workers (stay on free or low paid costs)

  • Redirects and rewrites: <1ms CPU, pure routing logic
  • Header manipulation: Adding CORS headers, security headers, auth checks that hit KV once
  • A/B testing: Cookie read + simple conditional + pass-through
  • Bot detection: IP/UA checks with minimal logic
  • Simple API proxies: fetch() to upstream, return response — most time is I/O, not CPU

Expensive Workers (model carefully before committing)

  • Image transformation: Even basic resizing via WASM is CPU-intensive
  • Cryptographic operations: Signing, verifying, hashing large payloads
  • Large JSON processing: Parsing and transforming multi-KB or MB payloads
  • Real-time data aggregation: Any in-Worker computation over arrays/objects
  • LLM API middleware: Streaming responses + token parsing + logging = CPU adds up
  • PDF/document generation: Anything running a WASM-compiled library

How to Estimate Your Bill Before You Deploy

Cloudflare’s dashboard shows CPU time percentiles per Worker in the metrics tab. Before you scale, send realistic traffic at your Worker (locally via wrangler dev, or against a canary deployment) and check the CPU time per request. Then do the math:

# Formula:
# Monthly cost = $5 base
#   + max(0, (total_requests - 10M) / 1M) * $0.30
#   + max(0, (avg_cpu_ms * total_requests) - 30M) / 1M * $0.02

# Example: 50M requests/month, 8ms avg CPU time
requests = 50_000_000
avg_cpu_ms = 8
total_cpu_ms = requests * avg_cpu_ms  # 400,000,000

request_overage = max(0, (requests - 10_000_000) / 1_000_000) * 0.30  # $12.00
cpu_overage = max(0, (total_cpu_ms - 30_000_000) / 1_000_000) * 0.02  # $7.40

total = 5 + request_overage + cpu_overage  # $24.40/month

That’s about $24/month for a mid-scale API doing non-trivial processing, which is genuinely cheap. The same 50M requests at 0.5ms CPU each stay inside the CPU allowance entirely: about $17/month, all of it base fee and request overage. The gap widens with heavier compute: at 80ms average CPU (image work, WASM, large payloads), the CPU line alone is about $79, and add-ons like Durable Objects can dwarf both. Average CPU time is the variable your code controls, so model it.
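To internalize how the bill moves, here is a small sensitivity sweep over average CPU time at fixed traffic. The helper is an illustrative sketch using the current list prices ($0.30 per additional million requests, $0.02 per additional million CPU milliseconds):

```python
def monthly_cost(requests, avg_cpu_ms):
    """Estimated Workers Paid bill: $5 base plus request and CPU overages."""
    req_overage = max(0, (requests - 10_000_000) / 1_000_000) * 0.30
    cpu_overage = max(0, requests * avg_cpu_ms - 30_000_000) / 1_000_000 * 0.02
    return 5 + req_overage + cpu_overage

for cpu_ms in (0.5, 8, 80):   # redirect-light, API-moderate, compute-heavy
    print(f"{cpu_ms:>5}ms avg CPU -> ${monthly_cost(50_000_000, cpu_ms):,.2f}/month")
```

At 50M requests, the three profiles land around $17, $24, and $96 respectively: same traffic, very different bills.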

When to Consider Alternatives

Cloudflare Workers is not always the right answer, and I’ll say that plainly. If your workload is CPU-heavy or you need persistent compute (not just edge logic), you’re probably better served by a traditional server or container setup.

For side projects and hobby workloads, I’ve had good results keeping heavier compute on DigitalOcean — a $6/month Droplet running a lightweight Node or Go service will handle a lot of CPU-intensive work that would cost 10x on Workers at scale. Use Workers for what they’re actually good at: edge routing, caching logic, auth middleware, and geographic distribution. Push the heavy lifting to a real server. (We compared hosting options in depth in our Best Cloud Hosting for Side Projects 2026 guide if you want to see the full breakdown.)

If you’ve been burned by a platform migration before, our piece on migrating 14 projects off Heroku is worth reading — the same architectural lessons apply when you’re deciding whether to go all-in on edge compute.

Use Cloudflare Workers If…

  • Your Worker does <5ms CPU work per request
  • You’re handling high request volume with lightweight logic (redirects, auth, routing)
  • You want zero cold starts and global distribution without managing infrastructure
  • You’re building on top of R2 and want to avoid egress fees
  • Your traffic is bursty and unpredictable — Workers scales to zero, no idle cost

Don’t Use Cloudflare Workers If…

  • Your logic requires >10ms CPU per request at scale (model this first)
  • You need persistent background processes or long-running tasks
  • Your team isn’t comfortable with the V8 isolate environment and its constraints (no native Node modules, limited filesystem access)
  • You’re doing heavy KV writes — the cost model punishes write-heavy patterns
  • You need GPU compute or any non-JS/WASM runtime

Practical Tips to Keep Your Workers Bill Under Control

  1. Profile CPU time before you scale. Send realistic requests and check the CPU time percentiles in the Workers dashboard metrics. Don’t guess.
  2. Cache aggressively. Use the Cache API inside Workers to avoid re-running expensive logic on repeat requests. A cache hit costs almost nothing.
  3. Offload heavy compute. If you need to do something CPU-intensive, call an external service (your own server, a Lambda, whatever) and let Workers handle the lightweight orchestration.
  4. Set spending limits. Cloudflare lets you set billing alerts and spending caps. Use them. There’s no excuse for a surprise bill.
  5. Watch KV write patterns. If you’re caching per-user data, consider whether you actually need to write to KV or whether you can use a response header + browser cache instead.
  6. Use D1 over KV for structured data. D1 reads are significantly cheaper at scale than KV reads for structured query patterns.
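Tip 2 is worth quantifying. This sketch assumes 20ms of CPU on a cache miss, about 0.2ms to serve a hit, and 20M requests/month; all three numbers are made up for illustration:

```python
def cpu_overage_cost(requests, hit_ratio, miss_ms=20.0, hit_ms=0.2):
    """CPU overage at $0.02 per million CPU ms beyond the 30M included."""
    total_cpu_ms = requests * (hit_ratio * hit_ms + (1 - hit_ratio) * miss_ms)
    return max(0, total_cpu_ms - 30_000_000) / 1_000_000 * 0.02

for h in (0.0, 0.5, 0.9):
    print(f"hit ratio {h:.0%}: CPU overage ${cpu_overage_cost(20_000_000, h):.2f}/month")
```

A 90% hit ratio cuts the CPU overage from $7.40 to under 30 cents. Caching isn’t just a latency win on Workers; it’s directly a billing lever.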

Final Recommendation

Cloudflare Workers pricing is genuinely excellent for the use cases it was designed for. The free tier is one of the most useful free tiers in the serverless space — 100K requests/day is enough to run a real project. The paid tier at $5/month is a no-brainer if you’re past the free limits.

The billing model becomes adversarial exactly when developers treat Workers like a general-purpose compute platform and deploy CPU-heavy logic without modeling costs first. The CPU millisecond pricing introduced in 2023 changed the calculus: request counts alone no longer predict your bill, and older posts that say “Workers is basically free” predate the current model.

My actual recommendation: deploy Workers for edge logic, auth, routing, and anything I/O-bound. For anything doing real computation, keep a cheap server in the mix — something like a DigitalOcean Droplet handles CPU-heavy tasks at a fraction of the Workers overage cost. The two work well together. You don’t have to choose one architecture for everything.

If you’re evaluating your broader developer toolchain while you’re in this planning phase, our Best AI Tools for Developers in 2026 roundup and the Best MCP Servers for Coding Agents guide cover tools that pair well with edge-first architectures — worth a read while you’re in architecture mode.
