
PostGraphile: The Versioning Problem Nobody Warns You About

PostGraphile's magic is that your GraphQL schema lives in your database. That's also why versioning it will eventually bite you.

PostGraphile will generate a production-quality GraphQL API from your Postgres schema in about fifteen minutes. I've done it more than once and the first hour always feels like cheating. The part that feels less like cheating — and more like a trap you walked into with your eyes open — is what happens six months later when a client needs a breaking change and you've got mobile apps, third-party integrations, and a legacy dashboard all talking to the same endpoint.

This is the versioning problem. It catches every team eventually, and the PostGraphile docs are not particularly forthcoming about it.

Why PostGraphile is Different

With a hand-rolled GraphQL API, your schema is code. You version it like code. You add a v2 namespace, deprecate resolvers, run two schemas in parallel, whatever. The schema is yours to manipulate.

With PostGraphile, your schema is your database. Every table, view, function, and comment annotation in Postgres is reflected directly into GraphQL. That's the whole value proposition. A column rename in Postgres is a breaking change in your API. A dropped function is a dropped mutation. No middleware layer to absorb the blow.
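
As a toy illustration (the table and column names are made up, and the exact GraphQL field names depend on PostGraphile's default inflection):

```sql
-- Internally this looks like a harmless refactor:
ALTER TABLE public.sample RENAME COLUMN barcode TO sample_barcode;
-- But PostGraphile regenerates the schema from the catalog, so the
-- GraphQL field `barcode` vanishes and something like `sampleBarcode`
-- appears in its place. Every query that selected `barcode` now
-- fails validation.
```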

I integrated PostGraphile for a biotech client a couple of years ago — sample tracking, lab workflow, that kind of thing. We chose it because their data model was complex and well-normalized and writing a thousand resolvers by hand felt like a waste. PostGraphile was great. Then they needed to expose a subset of the API to an external instrument vendor, and suddenly "the schema is the database" stopped being a feature.

The Specific Thing That Bites You

The failure mode isn't one big moment. It's incremental. It looks like this:

  1. You add a column. PostGraphile exposes it automatically. Nice.
  2. A developer writes a query against that column in the frontend.
  3. You realize the column name was wrong and rename it.
  4. The frontend breaks. You fix it.
  5. Three months later you have an external consumer you don't control. Now a rename breaks them and they don't tell you for two weeks.

Step 5 is where teams get into trouble. The reflex when you hit this is to reach for schema namespacing or endpoint versioning (/v1/graphql, /v2/graphql). That works, but PostGraphile doesn't give you that out of the box. You have to engineer it deliberately, and the earlier you do it the less it hurts.

How I Actually Handle This Now

I use three tools in combination: Postgres schemas, smart tags, and a thin Express routing layer. None of them alone is sufficient.

Postgres Schemas as Version Boundaries

Postgres has first-class schema namespacing (public, api_v1, api_v2, etc.). PostGraphile can be pointed at specific schemas. I use this to control exactly what surface area is exposed.

-- api_v1 schema: stable, public-facing
CREATE SCHEMA api_v1;

-- Expose only what external consumers need via views
CREATE VIEW api_v1.sample AS
  SELECT
    id,
    legacy_barcode   AS barcode,   -- old column name, kept for v1 compat
    collected_at,
    status
  FROM public.sample
  WHERE deleted_at IS NULL;

-- USAGE on the schema is required before table-level grants work
GRANT USAGE ON SCHEMA api_v1 TO postgraphile_role;
GRANT SELECT ON api_v1.sample TO postgraphile_role;

The internal tables live in public. The api_v1 schema is a stable contract layer made of views and functions. When the underlying table changes, the view absorbs it. External consumers never see public directly.
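
For example, when the internal column gets renamed, the same migration re-points the view so the v1 field name never moves (a sketch, using the hypothetical names from above):

```sql
-- Rename the internal column...
ALTER TABLE public.sample RENAME COLUMN legacy_barcode TO sample_barcode;

-- ...and update the v1 contract in the same transaction. The column
-- list and order are unchanged, so CREATE OR REPLACE is legal.
CREATE OR REPLACE VIEW api_v1.sample AS
  SELECT
    id,
    sample_barcode AS barcode,   -- v1 consumers still see `barcode`
    collected_at,
    status
  FROM public.sample
  WHERE deleted_at IS NULL;
```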

Two PostGraphile Instances, One Express App

Then I run separate PostGraphile middleware instances pointed at different schemas, mounted at different routes.

// server.js (Node/Express — one of the few places I'm not in PHP)
const express = require('express');
const { postgraphile } = require('postgraphile');

const app = express();

const sharedOptions = {
  pgDefaultRole: 'postgraphile_anon',
  jwtSecret: process.env.JWT_SECRET,
  jwtPgTypeIdentifier: 'public.jwt_token',
  enableCors: false,
  disableQueryLog: process.env.NODE_ENV === 'production',
};

// v1: locked-down, external-facing, only api_v1 schema
app.use(
  '/v1/graphql',
  postgraphile(process.env.DATABASE_URL, 'api_v1', {
    ...sharedOptions,
    // No GraphiQL on the public endpoint
    graphiql: false,
    // Disallow introspection from untrusted callers if needed
    // (handle via auth middleware upstream)
  })
);

// v2/internal: full schema, internal apps only
app.use(
  '/graphql',
  postgraphile(process.env.DATABASE_URL, ['public', 'app'], {
    ...sharedOptions,
    graphiql: process.env.NODE_ENV !== 'production',
    enhanceGraphiql: true,
    allowExplain: (req) => req.headers['x-internal'] === process.env.INTERNAL_KEY,
  })
);

app.listen(3000);

This gives me two independent schemas at two URLs. Breaking changes to public don't touch /v1/graphql. I can iterate internally while the external contract stays frozen.

Smart Tags for Deprecation Signaling

For softer deprecations — fields that still work but shouldn't be used — PostGraphile supports @deprecated via smart tags in column comments:

COMMENT ON COLUMN public.sample.legacy_barcode IS
  E'@deprecated Use barcode instead. Will be removed in v3.';

This surfaces in the GraphQL schema as a proper @deprecated directive, which GraphiQL, editor plugins, and Apollo's tooling will flag. It doesn't enforce anything, but it creates a paper trail, and consumers get a warning in their tooling instead of a surprise removal.

The Gotchas I Hit

View-based schemas don't get mutations automatically. When you expose api_v1.sample as a view, PostGraphile generates query types but not mutations. If external consumers need to write, you have to create explicit functions in api_v1:

CREATE FUNCTION api_v1.update_sample_status(sample_id int, new_status text)
RETURNS api_v1.sample
-- VOLATILE, not STABLE: Postgres rejects an UPDATE inside a
-- non-volatile function. SECURITY DEFINER so callers don't need
-- direct privileges on public.sample.
LANGUAGE sql VOLATILE SECURITY DEFINER AS $$
  UPDATE public.sample
  SET status = new_status
  WHERE id = sample_id
  RETURNING
    id,
    legacy_barcode AS barcode,
    collected_at,
    status;
$$;

More boilerplate than I'd like, but it's honest — mutations are where things get complicated and explicit is better than magic here.
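
Before trusting the generated mutation (which under PostGraphile's default inflection should surface as something like `updateSampleStatus` — treat that name as an assumption), I smoke-test the function directly in psql:

```sql
-- Call the contract function the same way PostGraphile will.
-- The id 42 is a placeholder; substitute a real row.
SELECT barcode, status
FROM api_v1.update_sample_status(42, 'processed');
```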

Computed columns will trip you up. If you're using PostGraphile's computed column pattern (functions named table_column), those show up in GraphQL automatically too. I've had a function rename cause a breaking change I didn't notice until a client reported it. The solution is the same — put stable function signatures in api_v1 and let the internal schema evolve freely.
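
The same contract-layer trick works for computed columns: define the stable function against the api_v1 row type so internal renames can't leak out. A sketch with a hypothetical field:

```sql
-- A function named <table>_<field> taking the row type becomes a
-- read-only `displayLabel` field on the v1 Sample type.
-- STABLE is correct here because the function only reads.
CREATE FUNCTION api_v1.sample_display_label(s api_v1.sample)
RETURNS text
LANGUAGE sql STABLE AS $$
  SELECT s.barcode || ' (' || s.status || ')';
$$;
```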

The @omit smart tag is your friend. Any table or column you're not ready to expose publicly should get @omit in the versioned schema, not in public. In public, omit nothing — your internal app needs the full picture. In api_v1, be explicit about what's visible:

COMMENT ON TABLE api_v1.internal_audit_log IS E'@omit';
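
The tag also takes an operation list when you want reads but not writes — more useful on tables than views, since views don't get mutations anyway. A sketch assuming PostGraphile v4's smart-comment syntax:

```sql
-- Keep the column queryable but drop it from create/update inputs.
COMMENT ON COLUMN public.sample.collected_at IS E'@omit create,update';
```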

Schema watch mode and migrations don't play well. In development, watchPg: true means PostGraphile reloads on schema changes. In production, if you're running migrations (I use Flyway for Postgres projects), the moment a migration runs your PostGraphile instances need to reload. I handle this with a lightweight health-check endpoint that triggers a SIGHUP to the PostGraphile process after migrations complete. Clunky but reliable.

When I'd Reach for This (and When I Wouldn't)

PostGraphile is genuinely excellent when:

  • Your data model is the product. Complex relational data, lots of filtering, cursor pagination — PostGraphile's connection types and where clauses handle this better than most hand-rolled APIs.
  • You control all the consumers (internal app, single mobile team, etc.).
  • You're moving fast and can afford a little schema debt early.

I'd think harder before using it when:

  • You're building a public API from day one with unknown third-party consumers. The schema-as-database model puts a lot of weight on your Postgres hygiene.
  • Your team's Postgres discipline is inconsistent. If column names and types are negotiable, a schema that directly reflects the database will be chaotic.
  • You need fine-grained deprecation timelines with formal notice periods. The tooling exists but you're doing more work than with a code-first API.

For the biotech project I mentioned, I'd make the same call again. The development velocity was worth the versioning overhead. For an e-commerce client who needed a public product catalog API with partners ingesting it, I went code-first with Lighthouse (Laravel's GraphQL library) and didn't look back. Different constraints, different answer.

The Bottom Line

PostGraphile's automatic schema generation is a genuine productivity win, but "automatic" doesn't mean "free." The versioning problem isn't a PostGraphile bug — it's a consequence of the architectural choice you made when you pointed it at your database. Plan for stable schema layers from the start, use Postgres schemas as version boundaries, and you'll be fine. Wait until your first external consumer asks for a breaking change to think about this, and you'll be paying for it in overtime.


Need help shipping something like this? Get in touch.