GraphQL Caching Is a Different Sport — Most Teams Miss It
HTTP caching assumptions break the moment you go GraphQL. Here's what actually works and what I've learned shipping it in production.
Most teams switching to GraphQL don't realize they've just opted out of a decade of HTTP caching infrastructure — until they're staring at a database on fire and a CDN that's doing absolutely nothing.
I've built GraphQL APIs for a few clients now — a biotech client doing lab data queries, a real estate platform hammering an MLS feed, an e-commerce backend — and the caching story is the part that consistently surprises people who come from a REST background. It's not that GraphQL caching is impossible. It's that everything you've internalized about caching doesn't apply, and the replacements require actual thought.
Why HTTP Caching Breaks at the Door
With REST, caching is almost accidental. You get a GET /products/42 and the URL itself is the cache key. Your CDN, your browser, your reverse proxy — everyone agrees on what that URL means and can stash the response. Slap a Cache-Control: public, max-age=3600 header on it and you're done. It's dumb and it works.
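For contrast, the whole REST story in Laravel terms fits in a few lines (a sketch; the route and model names are illustrative):

// REST: the URL is the cache key, the header does the rest.
Route::get('/products/{id}', function (string $id) {
    return response()
        ->json(Product::findOrFail($id))
        ->header('Cache-Control', 'public, max-age=3600');
});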
GraphQL blows this up. Almost every GraphQL API uses a single endpoint — POST /graphql — and the query is in the request body. CDNs don't cache POST requests by default, and even if they did, the cache key would need to be derived from the body content, not just the URL. Two requests to the same endpoint can ask for completely different data. There's no structural relationship between the URL and the data being fetched.
So the infrastructure that's been silently saving you for years? Gone.
What You Actually Have to Work With
There are a few real strategies. None of them are as hands-off as REST caching, but they're not magic either.
1. Persisted Queries (GET-based caching)
The cleanest CDN-compatible approach. Instead of sending the full query in the body, the client registers a query ahead of time and gets back a hash. Then it sends GET /graphql?operationId=abc123&variables={...}. Now your CDN can cache it because it's a GET with a predictable URL.
Apollo supports this out of the box. If you're on Lighthouse (the PHP GraphQL library I use with Laravel), you need to wire it up yourself or use the persisted queries extension.
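The server half is conceptually small. Here's a minimal sketch of the lookup, assuming a query map you ship at deploy time (the route closure, the config key, and the executeGraphQL() helper are illustrative stand-ins, not Lighthouse's actual API):

Route::get('/graphql', function (Request $request) {
    // Hash -> query text, registered when the client was built.
    $queries = config('graphql.persisted_queries', []);

    $query = $queries[$request->query('operationId')] ?? null;
    abort_if($query === null, 404, 'Unknown persisted query');

    $variables = json_decode($request->query('variables', '{}'), true);

    // Hand off to whatever executes GraphQL in your stack. Because this is
    // a GET with a stable URL, the CDN can finally participate.
    return executeGraphQL($query, $variables);
});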
The tradeoff: your client deployment and your server have to stay in sync on known queries. Manageable in a controlled environment (a mobile app, a first-party SPA), annoying if you have third-party consumers querying ad hoc.
2. Response Caching in the Resolver Layer
This is where I spend most of my time. Cache at the resolver level, not the HTTP level. When a resolver fetches data, check the cache first. Store results with a sensible TTL. Laravel's cache layer makes this straightforward.
// In a Lighthouse resolver, or a service class called by one
public function resolve($root, array $args): array
{
    $cacheKey = 'products:' . md5(json_encode($args));

    return Cache::remember($cacheKey, now()->addMinutes(10), function () use ($args) {
        return Product::query()
            ->when(isset($args['category']), fn ($q) => $q->where('category_id', $args['category']))
            ->when(isset($args['limit']), fn ($q) => $q->limit($args['limit']))
            ->get()
            ->toArray();
    });
}
Simple. Works. But you're caching the shaped data, not raw database rows — which means if the same product appears in two different queries with different field selections, you might cache redundant data. Acceptable. Not catastrophic.
3. DataLoader / Batching (the N+1 fix that also helps caching)
This isn't caching in the traditional sense but it's inseparable from performance at this layer. GraphQL's nested resolution pattern will absolutely destroy your database with N+1 queries if you don't batch. Lighthouse has a @with directive and a batched loader pattern. The in-memory deduplication within a single request is a form of per-request caching.
For the real estate client, I was resolving listings with nested agent data. Without batching, 50 listings meant 50 separate agent queries. With Lighthouse's batch loaders, it collapses to one. That's not disk or Redis caching — it's just not doing redundant work.
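If you want the shape of the pattern without Lighthouse's machinery, here's a stripped-down per-request loader (the class and wiring are illustrative; real loaders, Lighthouse's included, defer field completion so every queue() happens before the first load()):

final class AgentBatchLoader
{
    /** @var array<int, bool> IDs waiting to be fetched */
    private array $pending = [];

    /** @var array<int, Agent> Everything fetched so far, keyed by ID */
    private array $loaded = [];

    // Called while the listing list is being walked.
    public function queue(int $agentId): void
    {
        if (! isset($this->loaded[$agentId])) {
            $this->pending[$agentId] = true;
        }
    }

    // Called when a field value is actually needed.
    public function load(int $agentId): ?Agent
    {
        $this->queue($agentId);

        if ($this->pending !== []) {
            // One whereIn query for every queued ID instead of one per row.
            $batch = Agent::query()
                ->whereIn('id', array_keys($this->pending))
                ->get()
                ->keyBy('id')
                ->all();

            $this->loaded += $batch;
            $this->pending = [];
        }

        return $this->loaded[$agentId] ?? null;
    }
}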
4. Object-Level Caching with Cache Hints
The Apollo ecosystem has a @cacheControl directive: individual types and fields declare their cache policy, the server computes the minimum TTL across the full response graph, and it sets a Cache-Control header to match. If one field in your query is marked no-store, the whole response gets that treatment.
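The reduction itself is simple. A sketch of what the server does, with made-up hint values:

// Each resolved field contributes a hint (values here are invented).
$hints = [
    ['maxAge' => 600, 'scope' => 'PUBLIC'],  // e.g. Product
    ['maxAge' => 0,   'scope' => 'PRIVATE'], // e.g. the viewer, uncacheable
];

// The response header is the most conservative combination of all hints.
$maxAge  = min(array_column($hints, 'maxAge'));
$private = in_array('PRIVATE', array_column($hints, 'scope'), true);

$header = $maxAge > 0
    ? sprintf('Cache-Control: %s, max-age=%d', $private ? 'private' : 'public', $maxAge)
    : 'Cache-Control: no-store';
// Here: "Cache-Control: no-store", because one field opted out.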
Lighthouse has a @cache directive that does something similar at the field level:
type Query {
    product(id: ID!): Product @cache(maxAge: 600)
    currentUser: User @cache(maxAge: 0, private: true)
}
This is elegant when it works. The catch is that your cache invalidation story now lives in your schema, which is either a beautiful single source of truth or a maintenance nightmare depending on your team's discipline.
The Gotchas That Bit Me
Cache key collisions across users. Early on I cached resolver responses without accounting for the authenticated user. An admin's query for orders and a customer's query for orders generated the same cache key because I was only hashing the arguments. The customer saw the admin's data. That one hurt. Now I always include the user ID (or a permission hash) in any cache key that touches access-controlled data.
$cacheKey = 'orders:' . auth()->id() . ':' . md5(json_encode($args));
Obvious in hindsight. Not obvious at 2am when you're porting a REST API to GraphQL and copy-pasting caching logic.
Mutation cache invalidation is your problem. REST gives you POST /products and you know to invalidate /products and /products/*. GraphQL mutations give you nothing structural. You have to manually tag and invalidate. I use Laravel's cache tags religiously here:
// On write
Cache::tags(['products', 'product:' . $product->id])->flush();

// On read
Cache::tags(['product:' . $args['id']])->remember(
    'product:detail:' . $args['id'],
    now()->addMinutes(30),
    fn () => Product::find($args['id'])
);
This works well with Redis. It does not work with the file cache driver, which doesn't support tags. If you're on a shared host with file-based caching, you'll need a different strategy.
Subscriptions and cached data mixing. If you're running GraphQL subscriptions alongside queries, be careful about stale cache entries confusing real-time clients. I hit this on the biotech project — a subscription would push a new lab result, but a subsequent query would return the cached old value for thirty seconds. The solution was short TTLs for anything the subscription domain touched, not zero TTLs. You still want the cache; you just want it to be brief.
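In practice that means subscription-adjacent resolvers get seconds, not minutes (a sketch; the model and key layout are illustrative):

// The cache still absorbs bursts of identical queries, but staleness
// is capped at five seconds instead of thirty.
return Cache::remember(
    'lab-result:' . $args['sampleId'],
    now()->addSeconds(5),
    fn () => LabResult::where('sample_id', $args['sampleId'])->latest()->first()
);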
CDN caching with persisted queries still needs Vary headers. If you have authenticated and unauthenticated versions of the same persisted query, your CDN needs to vary on the Authorization header or you'll serve cached private data to anonymous users. This sounds obvious but CDN configs are easy to set-and-forget.
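One way to make the intent explicit at the origin is a middleware that always emits the header (a sketch of the handle method; your CDN will likely also need its own config to honor it):

public function handle(Request $request, Closure $next): Response
{
    $response = $next($request);

    // Tell caches that responses differ per Authorization header.
    $response->headers->set('Vary', 'Authorization');

    return $response;
}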
When I'd Reach for This
I'd set up resolver-level caching with Redis on any GraphQL API that has even moderate traffic. It's not optional — it's just part of the build. The DataLoader/batching pattern is mandatory from day one regardless of traffic; N+1 problems compound fast.
I'd add persisted queries and GET-based CDN caching if the client is a controlled first-party consumer (mobile app, internal SPA) and the data has meaningful public TTLs. For the real estate platform, listing detail pages are good CDN cache candidates. Agent profiles less so.
I wouldn't bother with persisted queries if I'm building a developer-facing API where third parties write arbitrary queries. The operational overhead of managing the query registry outweighs the caching benefit, and I'd rather invest in a good Redis resolver cache with short TTLs.
I also wouldn't reach for GraphQL at all if my data model is simple and flat. REST with proper Cache-Control headers, ETags, and a CDN is less work and less risk. GraphQL earns its complexity when you have a genuinely complex, nested data model with variable client needs — not when you want to look modern.
The Bottom Line
GraphQL trades your free HTTP caching lunch for flexibility. That's a reasonable trade for the right problem. But you have to show up and build the caching layer yourself — it won't happen by accident the way it does with REST. Know that going in, budget time for it, and you'll be fine.
Need help shipping something like this? Get in touch.