
PostGraphile: When the Magic Stops Being Magic

PostGraphile auto-generates a GraphQL API from your Postgres schema in minutes. Then you need a plugin, and the clock stops.

PostGraphile's zero-config GraphQL generation is genuinely impressive. Point it at a Postgres schema and you get queries, mutations, subscriptions, and pagination — all wired up and typed correctly. I've demoed it to clients and watched their eyes light up. Then I needed to customize something real, reached for the plugin system, and spent two days reading source code on GitHub.

That gap — from "this is magic" to "why isn't this documented" — is the thing I want to talk about.

What PostGraphile Actually Does

If you haven't used it: PostGraphile introspects your Postgres database and generates a full GraphQL API. Tables become types, columns become fields, foreign keys become relationships, and functions become mutations or queries. It respects Postgres permissions, so row-level security works out of the box. For a well-designed schema, you can have a production-ready API in an afternoon.

I pulled it in for a biotech project a couple years back — they had a normalized LIMS-adjacent schema with about 40 tables. Getting the base API up took maybe three hours. Filtering, ordering, cursor pagination — all there. The client was thrilled. I was cautiously optimistic.

Then the requirements arrived.

"We need computed fields on certain types." Fine, Postgres functions handle that.

"We need a custom mutation that does three things transactionally and sends a webhook." Okay, stored procedure, doable.

"We need to reshape this nested query response before it hits the client." And that's where PostGraphile's magic hands you a wrench and says: welcome to the plugin system.

The Plugin Architecture (In Theory)

PostGraphile is built on a library called graphile-build, which uses a hook-based plugin system. Plugins tap into named hooks at build time to modify the GraphQL schema — adding fields, wrapping resolvers, changing types. The mental model is reasonable once you have it. The problem is getting to that mental model.

The hooks have names like GraphQLObjectType:fields, GraphQLObjectType:fields:field, build, inflection. They fire in a specific order. Each one receives a build object, a context object, and whatever the previous hook returned. You modify and return.

Here's a minimal plugin that adds a computed field to every type that has a created_at column — something like a human-readable "age" string:

// plugins/AddAgeFieldPlugin.js
const AddAgeFieldPlugin = (builder) => {
  builder.hook('GraphQLObjectType:fields', (fields, build, context) => {
    const {
      extend,
      graphql: { GraphQLString },
    } = build;

    // `Self` is the GraphQL type currently being built; useful if you
    // want to limit this plugin to specific types by name.
    const { Self } = context;

    // Only add to types that have a createdAt field
    if (!fields.createdAt) {
      return fields;
    }

    return extend(fields, {
      ageDescription: {
        type: GraphQLString,
        description: 'Human-readable age since creation',
        resolve(parent) {
          if (!parent.created_at) return null;
          const ms = Date.now() - new Date(parent.created_at).getTime();
          const days = Math.floor(ms / 86400000);
          if (days === 0) return 'today';
          if (days === 1) return 'yesterday';
          return `${days} days ago`;
        },
      },
    });
  });
};

module.exports = AddAgeFieldPlugin;

And you wire it in:

// server.js (Express)
const { postgraphile } = require('postgraphile');
const AddAgeFieldPlugin = require('./plugins/AddAgeFieldPlugin');

app.use(
  postgraphile(process.env.DATABASE_URL, 'public', {
    appendPlugins: [AddAgeFieldPlugin],
    graphiql: true,
    enhanceGraphiql: true,
  })
);

That works. It's not bad. But notice what I had to already know: that extend is the right way to merge fields (don't use spread), that build.graphql is where the GraphQL primitives live, that context.Self tells you which type you're currently modifying, and that the hook name is GraphQLObjectType:fields and not any of the several similar-looking alternatives.

None of that is in the main docs in a way that makes sense until you've already figured it out.

The Gotchas That Actually Bit Me

The extend function is not optional. Early on I spread the fields object instead of using build.extend. The schema would build, some things would work, and then I'd get mysterious errors about duplicate field definitions in development mode. PostGraphile uses extend internally to track hook contributions and detect conflicts. Use it.
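To make the failure mode concrete, here's a stand-in that mimics the behavior difference (this is an illustration only, not graphile-build's actual implementation — the real extend also tracks which plugin contributed each field):

```javascript
// Minimal stand-in for graphile-build's `extend`: merge two field maps,
// but throw on duplicate keys instead of silently overwriting the way
// object spread does.
function extendLike(base, extra) {
  for (const key of Object.keys(extra)) {
    if (key in base) {
      throw new Error(`Field conflict: '${key}' is already defined`);
    }
  }
  return { ...base, ...extra };
}

const fields = { id: {}, createdAt: {} };

// Spread silently clobbers the existing field definition:
const spreadResult = { ...fields, createdAt: { replaced: true } };

// The extend-style merge surfaces the conflict immediately:
let conflict = null;
try {
  extendLike(fields, { createdAt: { replaced: true } });
} catch (err) {
  conflict = err.message;
}
```

With spread, `spreadResult.createdAt` is quietly the new definition and nothing complains until the schema misbehaves later; the extend-style merge fails fast at build time, which is exactly the diagnostic you want.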

Hook ordering matters and is not obvious. If you need your plugin to run after another plugin has added its fields, you need to specify that with provides and after arrays on the plugin function itself. Finding that API required reading the graphile-build source, not the PostGraphile docs.

AddAgeFieldPlugin.displayName = 'AddAgeFieldPlugin';
AddAgeFieldPlugin.provides = ['AddAgeField'];
AddAgeFieldPlugin.after = ['defaultMutations']; // run after built-in mutations

The makeExtendSchemaPlugin helper is your friend — but it's a different mental model. For adding custom queries and mutations, there's a separate helper that lets you write SDL and wire up resolvers. It's cleaner for that use case, but it's essentially a different API than the hook system:

const { makeExtendSchemaPlugin, gql } = require('graphile-utils');

const MyCustomMutationPlugin = makeExtendSchemaPlugin((build) => ({
  typeDefs: gql`
    extend type Mutation {
      processLabSample(sampleId: Int!, priority: String): LabSamplePayload
    }

    type LabSamplePayload {
      success: Boolean!
      jobId: String
    }
  `,
  resolvers: {
    Mutation: {
      async processLabSample(_parent, args, context) {
        const { sampleId, priority } = args;
        const { pgClient } = context;

        // pgClient already runs inside PostGraphile's per-mutation
        // transaction, so nest with a savepoint rather than BEGIN/COMMIT.
        await pgClient.query('SAVEPOINT graphql_mutation');
        try {
          const { rows } = await pgClient.query(
            'SELECT * FROM lab.queue_sample($1, $2)',
            [sampleId, priority]
          );
          // fire webhook here
          await pgClient.query('RELEASE SAVEPOINT graphql_mutation');
          return { success: true, jobId: rows[0].job_id };
        } catch (err) {
          await pgClient.query('ROLLBACK TO SAVEPOINT graphql_mutation');
          throw err;
        }
      },
    },
  },
}));

This is actually pretty clean. But understanding when to use makeExtendSchemaPlugin versus writing a raw hook plugin — that distinction took me longer to internalize than I'd like to admit.

Inflection is a separate system you will need to touch. PostGraphile converts Postgres snake_case names to camelCase GraphQL names using an inflection layer. It's pluggable. The moment a client says "can we call this field patientDOB instead of patientDob", you're writing an inflector. It's not hard, but it's yet another surface area.
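The naming rule itself is usually simple; the work is knowing where it plugs in. Here's the kind of logic a custom inflector carries, sketched as a pure function so it's easy to unit test — the acronym list and function name are my own, and wiring this into PostGraphile v4 would go through graphile-utils rather than a bare function:

```javascript
// A snake_case -> camelCase rule with an acronym exception list, the kind
// of logic you'd put behind a custom inflector. Kept as a pure function
// here so it can be tested without a database or schema build.
const ACRONYMS = new Set(['dob', 'id', 'url', 'api']);

function inflectColumn(snakeName) {
  return snakeName
    .split('_')
    .map((part, i) => {
      if (i === 0) return part; // first segment stays lowercase
      if (ACRONYMS.has(part)) return part.toUpperCase();
      return part[0].toUpperCase() + part.slice(1);
    })
    .join('');
}
```

So 'patient_dob' becomes 'patientDOB' instead of the default 'patientDob', while 'created_at' still comes out as 'createdAt'. The real inflector also has to coexist with smart comments and the other built-in naming rules, which is where the extra surface area shows up.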

Error messages are often about the generated schema, not your plugin. When you make a mistake in a plugin, you frequently get a GraphQL schema validation error that points at a generated type, not at your code. Learning to read those errors backward into plugin bugs is a skill you develop slowly and painfully.

When I'd Reach for PostGraphile

I'd use PostGraphile when the schema is the product. If the business logic lives primarily in Postgres — views, functions, RLS policies, triggers — PostGraphile is excellent. It's also great for internal tools where the schema is stable and the team understands Postgres.

I used it on a real estate data project where almost all the interesting logic was in materialized views and Postgres functions. The plugin surface we needed was minimal. It was the right call. We shipped faster than if we'd hand-rolled a GraphQL API.

I'd skip PostGraphile when:

  • The team doesn't know Postgres well. You end up fighting two things at once.
  • Business logic is heavily in the application layer. The magic of auto-generation becomes friction when half your resolvers need custom behavior.
  • You need tight control over API versioning. PostGraphile's schema changes when your database schema changes. That's often fine, but not always.
  • The client is consuming the API from a mobile app with strict bandwidth constraints and you need heavily optimized, hand-shaped responses.

For greenfield projects where I control the schema and the client is a React app, I still consider it seriously. For integrations into existing systems — an EHR, a legacy ERP, something with a weird data model baked in by a vendor — I wouldn't bother. The impedance mismatch between what PostGraphile expects and what those schemas look like makes customization miserable.

Closing

PostGraphile is one of those tools that earns genuine enthusiasm for the first two weeks and then quietly charges you interest. The plugin system is powerful — I've shipped real production customizations with it — but the learning curve is steeper than the documentation admits. If you're going to use it past toy demos, budget time to read graphile-build source and accept that you'll be in Discord asking questions. It's worth it on the right project. Just don't let the magic trick you into underestimating the setup cost.
