How lazy should a GraphQL resolver be?

For some context, here's a bird's-eye view of my architecture: GraphQL -> Resolvers -> |Domain Boundary| -> Services -> Loaders -> Data Sources (Postgres/Redis/Elasticsearch)

Past the domain boundary, there are no GraphQL specific constructs. Services represent the various dimensions of the domain, and resolvers simply process SomeQueryInput, delegate to the proper services, and then construct a proper SomeQueryResult with the operation results. All business rules, including authorization, live in the domain. Loaders provide access to domain objects with abstractions over data sources, sometimes using the DataLoader pattern and sometimes not.

Let me illustrate my question with a scenario: Let's say there's a User that has-a Project, and a Project has-many Documents. A project also has-many Users, and some users might not be allowed to see all Documents.

Let's construct a schema, and a query to retrieve all documents that the current user can see.

type Query {
  project(id:ID!): Project
}

type Project {
  id: ID!
  documents: [Document!]! 
}

type Document {
  id: ID!
  content: String!
}

{
  project(id: "cool-beans") {
    documents {
      id
      content
    }   
  }
}
Assume the user's state is resolved outside of GraphQL and injected into the context.
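For illustration, here is roughly how that injection might look with Apollo Server; authenticateRequest, typeDefs, and resolvers are placeholders, not part of my actual code:

const { ApolloServer } = require('apollo-server');

// A sketch only: assumes a hypothetical authenticateRequest() helper that
// resolves the viewer from the HTTP request before GraphQL execution begins.
const server = new ApolloServer({
  typeDefs,   // the schema above
  resolvers,  // the resolvers below
  context: async ({ req }) => {
    const user = await authenticateRequest(req); // resolved outside of GraphQL
    return { user };                             // injected into every resolver's ctx
  },
});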

And some corresponding infrastructure code:

const QueryResolver = {
  project: (parent, args, ctx) => {
    return projectService.findById({ id: args.id, viewer: ctx.user });
  },
}

const ProjectResolver = {
  documents: (project, args, ctx) => {
    return documentService.findDocumentsByProjectId({ projectId: project.id, viewer: ctx.user })
  }
}

const DocumentResolver = {
  content: async (parent, args, ctx) => {
    const document = await documentLoader.load(parent.id);
    return document.content;
  }
}


const documentService = {
  findDocumentsByProjectId: async ({ projectId, viewer }) => {
    /* return a list of document ids that the viewer is eligible to view */
    return getThatData(`SELECT id FROM Documents WHERE projectId = $1 AND userCanViewEtc()`)
  }
}

So the query execution would go: resolve the project, get the list of documents the viewer is eligible to view, resolve the documents, and resolve their content. You can imagine the DocumentLoader being ultra-generic and unconcerned with business rules: its sole job is to get an object for an ID as fast as possible.

SELECT * FROM Documents WHERE id IN ($1)
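A minimal sketch of such a loader, assuming a node-postgres style db.query helper (none of this is my actual code; DataLoader requires the batch function to return results in the same order as the keys):

const DataLoader = require('dataloader');

// Ultra-generic: fetch documents by id, no business rules involved.
const documentLoader = new DataLoader(async (ids) => {
  const { rows } = await db.query('SELECT * FROM "Documents" WHERE id = ANY($1)', [ids]);

  // Return the rows in the same order as the requested ids.
  const byId = new Map(rows.map((row) => [row.id, row]));
  return ids.map((id) => byId.get(id) || null);
});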

My question revolves around documentService.findDocumentsByProjectId. There seem to be multiple approaches here. The service, as it is now, has some GraphQL knowledge baked into it: it returns "stubs" of the required objects, knowing that they will be resolved into proper objects. This strengthens the GraphQL domain, but weakens the service domain. If another service called this service, it would get a useless stub.

Why not just have findDocumentsByProjectId do the following:

SELECT id, name, content FROM "Documents" JOIN permissions, etc etc

Now the service is more powerful and returns entire business objects, but the GraphQL domain has become more brittle: you can imagine more complex scenarios where the GraphQL schema is queried in a way the services don't expect, and you end up with broken queries and missing data. You can also now just... erase the resolvers you wrote, as most servers will trivially resolve these already hydrated objects. You've taken a step back towards a REST-endpoint approach.

Additionally, the second method can leverage data source indexes intended for specific purposes, whereas the DataLoader uses a more brute force WHERE IN kind of approach.

How do you balance these concerns? I understand this is probably a big question, but it's something I've been thinking about a lot. Is the Domain Model missing concepts that could be useful here? Should the DataLoader queries be more specific than just using universal IDs? I struggle to find an elegant balance.

Right now, my services have both: findDocumentStubs and findDocuments. The first is used by resolvers, the second by other internal services since they can't rely on GraphQL resolution, but this doesn't feel quite right either. Even with DataLoader batching and caching, it still feels like someone is doing unnecessary work.
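To make that split concrete, here is a sketch of what the two methods might look like (the SQL and the userCanViewEtc() placeholder just mirror the example above; this is illustrative, not my actual service):

const documentService = {
  // Used by resolvers: returns id "stubs" that the DocumentResolver/DataLoader
  // will hydrate later.
  findDocumentStubs: async ({ projectId, viewer }) => {
    return getThatData(`SELECT id FROM Documents WHERE projectId = $1 AND userCanViewEtc()`);
  },

  // Used by other services: returns fully hydrated domain objects, since those
  // callers can't rely on GraphQL resolution to fill in the rest.
  findDocuments: async ({ projectId, viewer }) => {
    return getThatData(`SELECT id, name, content FROM Documents WHERE projectId = $1 AND userCanViewEtc()`);
  },
};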


There are 3 answers

AdventureBeard (Best Answer)

(Answering my own question after some research and synthesizing some of @Daniel's recommendations)

Let me try to address your core concern, which centers on fetching collections that fit some criteria. The friction you're feeling comes from fetching the collection of document ids, and then turning around and making a similar query to resolve the rest of the fields on those documents. I think it's reasonable to feel like this is duplicated effort at first, especially if you're new to GraphQL: why not eagerly grab all the needed fields from the database on that first query? There's a good reason:

Let's say we eagerly grab the document data that we "know" we'll need: Rather than fetching the list of ids in the ProjectResolver, and fetching again in the DocumentResolver to resolve the Documents, we eagerly fetch everything in the ProjectResolver, and then let our GraphQL server trivially resolve the Document fields. This seems to work fine, but we've moved the burden of Document resolution to the Project resolver. Let's add a type User with a field createdDocuments: [Document!]!.

type User {
  id: ID!
  name: String!
  createdDocuments: [Document!]!
}

What happens when you query created documents on User? Nothing helpful, unless we have the UserResolver fetch Document data too... By allowing a parent to be the only source of data for its children, we force all future parents to do the same. This makes our GraphQL API brittle and hard to maintain and extend. If we instead make the ProjectResolver lazy and return only the bare minimum, and force the DocumentResolver to do all the work related to Documents, we don't have this problem.
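To make the brittleness concrete, here's a sketch of the eager approach; the findFull... service methods are hypothetical. Every new parent of Document has to repeat the hydration work:

// Eager approach: each parent resolver must fully hydrate Documents itself.
const ProjectResolver = {
  documents: (project, args, ctx) =>
    documentService.findFullDocumentsByProjectId({ projectId: project.id, viewer: ctx.user }),
};

// Adding User.createdDocuments now forces the same work in a second place:
const UserResolver = {
  createdDocuments: (user, args, ctx) =>
    documentService.findFullDocumentsByCreator({ creatorId: user.id, viewer: ctx.user }),
};

// With lazy parents, both of these would return bare { id } stubs and the
// DocumentResolver would remain the single place that knows how to hydrate a Document.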

There's still the itchy feeling from those two roundtrips to the DB. You can take the middle-path by leaning into your DataLoaders more and using cache priming. The Facebook JS DataLoader implementation has a method called prime(), which allows you to seed data into your loader's cache. If you're using a bunch of DataLoaders, you'll likely have multiple loaders referring to the same objects under different contexts. (This should feel familiar if you use Apollo Client for front-end work). When you fetch some object in one context, just prime it for the other contexts as a post-processing step.

When you fetch that list of documents for a project, go ahead and eagerly fetch the content as well, but use the results to prime the DocumentLoader. Now when your DocumentResolver runs, it'll have all this data ready for it, but it will still be self-sufficient if there are no pre-fetched results. You'll have to use your best judgment about when to do this based on your application's needs. You can also take Daniel Rearden's suggestion and use GraphQLResolveInfo to decide conditionally whether to pre-fetch like this, but make sure not to get stuck in the weeds doing micro-optimizations.

Imagine a scenario where you have two DataLoaders: ProjectDocumentsLoader and DocumentLoader. ProjectDocumentsLoader can prime DocumentLoader with its results as a post-processing step. I like to wrap my DataLoaders in a lightweight abstraction to deal with pre- and post-processing.


class Loader {
  async load(id) {
    const results = await this.loader.load(id);
    return this.postProcess(results);
  }

  postProcess(data) {
    return data;
  }

  prime(key, value) {
    this.loader.prime(key, value);
  }
}

class ProjectDocumentsLoader extends Loader {
  constructor(context) {
    super();
    this.context = context;
    this.loader = new DataLoader(/* function to get collection of documents by project */);
  }

  postProcess(documents) {
    documents.forEach(doc => this.context.documentLoader.prime(doc.id, doc));
    return documents;
  }
}

class DocumentLoader extends Loader {
  constructor(context) {
    super();
    this.context = context;
    this.loader = new DataLoader(/* function to get documents by id */);
  }
}

So final answer: Your GraphQL resolvers should be super lazy, with the option of pre-fetching so long as it's an optimization and never the source of truth.

Daniel Rearden

If you're writing resolvers like this

function resolveFullName({ first_name, last_name }) {
  return `${first_name} ${last_name}`;
}

then you're arguably doing things wrong.

What you're effectively doing in that case is pulling the domain logic out of your domain layer and injecting it into your API layer. If you're following good practices for designing your database, then your data layer is going to be a normalized mess that can't be consumed directly. It's your domain layer's job to apply your business rules and transform that data into a shape that's then usable by other parts of your application.
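In other words, something like the following sketch, where the composition happens behind the domain boundary (userService and the SQL are illustrative, reusing the getThatData placeholder from your question):

// The domain service applies the business rule; the API layer never sees
// first_name/last_name, only the finished domain object.
const userService = {
  findById: async ({ id }) => {
    const row = await getThatData(`SELECT id, first_name, last_name FROM Users WHERE id = $1`);
    return {
      id: row.id,
      fullName: `${row.first_name} ${row.last_name}`, // business rule lives here
    };
  },
};

// No fullName resolver needed: the default resolver reads it off the object.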

You wrote:

You can also now just... erase the resolvers you wrote, as most servers will trivially resolve these already hydrated objects. You've taken a step back towards a REST-endpoint approach.

I don't think that's a fair assessment. You're still leveraging GraphQL to join the various domain objects returned by your services into a single graph. A client application can still make a single request to your API and get all the data it needs -- there's nothing REST-like about what you're doing.

If your concern is optimizing your database queries, then you certainly can leverage more complex DataLoader patterns to achieve that goal. The methods exposed by your services can also accept an array of fields as an argument, which would let you be more selective about which columns to select and which joins to make when "hydrating" your domain object. A GraphQL resolver can easily derive this array of fields from the GraphQLResolveInfo object it's passed as its fourth parameter.
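For example, the top-level selections can be read straight off the info object; a simplified sketch that ignores fragments and aliases:

// Collect the field names the client actually requested on this node.
function selectedFields(info) {
  return info.fieldNodes[0].selectionSet.selections
    .filter((selection) => selection.kind === 'Field')
    .map((selection) => selection.name.value);
}

const ProjectResolver = {
  documents: (project, args, ctx, info) => {
    const fields = selectedFields(info); // e.g. ['id', 'content']
    return documentService.findDocumentsByProjectId({
      projectId: project.id,
      viewer: ctx.user,
      fields, // the service can now select only these columns
    });
  },
};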

David Harkness

I'm using this pattern of having field resolvers load the data they need in combination with data loaders to avoid duplicate queries, and it works nicely. You can extend it to the query resolvers to enable more parallel loading.

{
  project(id: "foo") {
    title
    documents {
      content
    }
  }
}

As written, this will wait for the project to load before loading its documents.

project    [----------]
documents              [----------]

But the documents resolver already has what it needs at the start. If you move the project loading into the title resolver as you've already done for Document.content, everything is loaded in parallel.

project    [----------]
documents   [----------]
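
A sketch of that rearrangement, assuming a projectLoader in the context: Query.project returns only a stub, and each field resolver loads what it needs, so the two fetches start at (nearly) the same time.

const QueryResolver = {
  // Return a stub immediately; nothing is awaited here.
  project: (parent, args) => ({ id: args.id }),
};

const ProjectResolver = {
  title: async (project, args, ctx) => {
    const full = await ctx.projectLoader.load(project.id);
    return full.title;
  },
  // Starts right away with just the id, running in parallel with title.
  documents: (project, args, ctx) =>
    documentService.findDocumentsByProjectId({ projectId: project.id, viewer: ctx.user }),
};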