How lazy should a GraphQL resolver be?
For some context, here's a bird's-eye view of my architecture: GraphQL -> Resolvers -> |Domain Boundary| -> Services -> Loaders -> Data Sources (Postgres/Redis/Elasticsearch)
Past the domain boundary, there are no GraphQL specific constructs. Services represent the various dimensions of the domain, and resolvers simply process SomeQueryInput, delegate to the proper services, and then construct a proper SomeQueryResult with the operation results. All business rules, including authorization, live in the domain. Loaders provide access to domain objects with abstractions over data sources, sometimes using the DataLoader pattern and sometimes not.
Let me illustrate my question with a scenario: Let's say there's a User that has-a Project, and a Project has-many Documents. A project also has-many Users, and some users might not be allowed to see all Documents.
Let's construct a schema, and a query to retrieve all documents that the current user can see.
type Query {
  project(id: ID!): Project
}

type Project {
  id: ID!
  documents: [Document!]!
}

type Document {
  id: ID!
  content: String!
}
{
  project(id: "cool-beans") {
    documents {
      id
      content
    }
  }
}
Assume the user state is processed outside of the GraphQL context and injected into the context.
And some corresponding infrastructure code:
const QueryResolver = {
  project: (parent, args, ctx) => {
    return projectService.findById({ id: args.id, viewer: ctx.user });
  },
};

const ProjectResolver = {
  documents: (project, args, ctx) => {
    return documentService.findDocumentsByProjectId({ projectId: project.id, viewer: ctx.user });
  },
};

const DocumentResolver = {
  content: async (parent, args, ctx) => {
    const document = await documentLoader.load(parent.id);
    return document.content;
  },
};

const documentService = {
  findDocumentsByProjectId: async ({ projectId, viewer }) => {
    /* return a list of document ids that the viewer is eligible to view */
    return getThatData(`SELECT id FROM Documents WHERE projectId = $1 AND userCanViewEtc()`);
  },
};
So the query execution goes: resolve the project, get the list of documents the viewer is eligible to view, resolve the documents, and resolve their content. You can imagine the DocumentLoader being ultra-generic and unconcerned with business rules: its sole job is to fetch an object by ID as fast as possible.
SELECT * FROM Documents WHERE id IN ($1)
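As a sketch, the batch function behind such an ultra-generic loader could look like the following. The in-memory `fakeDb` stands in for the Postgres query above, and all names here are illustrative, not part of the original architecture:

```javascript
// Sketch of the batch function behind an ultra-generic DocumentLoader.
// `fakeDb` stands in for Postgres; real code would run the WHERE id IN query.
const fakeDb = new Map([
  ['doc-1', { id: 'doc-1', content: 'hello' }],
  ['doc-2', { id: 'doc-2', content: 'world' }],
]);

// DataLoader contract: given N ids, return N results in the same order,
// with null for any id that was not found.
async function batchDocuments(ids) {
  return ids.map((id) => fakeDb.get(id) ?? null);
}
```

In a real setup you would pass `batchDocuments` to `new DataLoader(batchDocuments)` from the `dataloader` package, which coalesces every `load(id)` call made during one tick into a single batch.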
My question revolves around documentService.findDocumentsByProjectId. There seem to be multiple approaches here. The service, as it stands, has some GraphQL knowledge baked into it: it returns "stubs" of the required objects, knowing that they will be resolved into proper objects later. This strengthens the GraphQL domain but weakens the service domain: if another service called this one, it would get a useless stub.
Why not just have findDocumentsByProjectId do the following:
SELECT id, name, content FROM "Documents" JOIN permissions, etc etc
Now the service is more powerful and returns entire business objects, but the GraphQL domain has become more brittle: you can imagine more complex scenarios where the GraphQL schema is queried in a way the services don't expect, and you end up with broken queries and missing data. You can also now just... erase the resolvers you wrote, as most servers will trivially resolve these already hydrated objects. You've taken a step back towards a REST-endpoint approach.
Additionally, the second method can leverage data source indexes intended for specific purposes, whereas the DataLoader uses a more brute force WHERE IN kind of approach.
How do you balance these concerns? I understand this is probably a big question, but it's something I've been thinking about a lot. Is the Domain Model missing concepts that could be useful here? Should the DataLoader queries be more specific than just using universal IDs? I struggle to find an elegant balance.
Right now, my services have both: findDocumentStubs and findDocuments. The first is used by resolvers; the second by other internal services, since they can't rely on GraphQL resolution. But this doesn't feel quite right either. Even with DataLoader batching and caching, it still feels like someone is doing unnecessary work.
(Answering my own question after some research and synthesizing some of @Daniel's recommendations)
Let me try to address your core concern, which centers around fetching collections that fit some criteria. The friction you're feeling comes from fetching the collection of document ids, and then turning around and making a similar query to resolve the rest of the fields on those documents. I think it's reasonable to feel like this is duplicated effort at first, especially if you're new to GraphQL: why didn't you eagerly grab all the needed fields from the database on that first query? There's a good reason:
Let's say we eagerly grab the document data that we "know" we'll need: Rather than fetching the list of ids in the ProjectResolver, and fetching again in the DocumentResolver to resolve the Documents, we eagerly fetch everything in the ProjectResolver, and then let our GraphQL server trivially resolve the Document fields. This seems to work fine, but we've moved the burden of Document resolution to the Project resolver. Let's add a type User with a field createdDocuments: [Document!]!.
What happens when you query createdDocuments on User? Nothing helpful, unless we have the UserResolver fetch Document data too... By allowing a parent to be the only source of data for its children, we force all future parents to do the same. This makes our GraphQL API brittle and hard to maintain and extend. If we instead make the ProjectResolver lazy, returning only the bare minimum, and force the DocumentResolver to do all the work related to Documents, we don't have this problem.
There's still the itchy feeling from those two roundtrips to the DB. You can take the middle-path by leaning into your DataLoaders more and using cache priming. The Facebook JS DataLoader implementation has a method called prime(), which allows you to seed data into your loader's cache. If you're using a bunch of DataLoaders, you'll likely have multiple loaders referring to the same objects under different contexts. (This should feel familiar if you use Apollo Client for front-end work). When you fetch some object in one context, just prime it for the other contexts as a post-processing step.
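To make prime() concrete, here's a toy loader with the same cache semantics. This is illustrative only; in practice you'd use the `dataloader` package's load()/prime(), which behave the same way:

```javascript
// Toy loader illustrating cache priming; DataLoader's prime() works the same way.
class ToyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.cache = new Map();
  }
  // Seed the cache so a later load() never touches the data source.
  prime(key, value) {
    if (!this.cache.has(key)) this.cache.set(key, Promise.resolve(value));
  }
  load(key) {
    if (!this.cache.has(key)) {
      this.cache.set(key, this.batchFn([key]).then(([value]) => value));
    }
    return this.cache.get(key);
  }
}

let dbHits = 0;
const documentLoader = new ToyLoader(async (ids) => {
  dbHits += 1; // count round trips to the data source
  return ids.map((id) => ({ id, content: `content of ${id}` }));
});

// Some earlier query already fetched this document, so prime it:
documentLoader.prime('doc-1', { id: 'doc-1', content: 'hello' });
documentLoader.load('doc-1'); // resolves from the primed cache; no data-source hit
```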
When you fetch that list of documents for a project, go ahead and eagerly fetch the content as well, and use the results to prime the DocumentLoader. Now when your DocumentResolver starts, it'll have all this data ready for it, but it will still be self-sufficient when there are no pre-fetched results. You'll have to use your best judgment about when to do this based on your application's needs. You can also follow Daniel Rearden's suggestion and use GraphQLResolveInfo to conditionally decide to pre-fetch like this, but be careful not to get stuck in the weeds doing micro-optimizations.
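As a hedged sketch of the GraphQLResolveInfo approach: `info.fieldNodes` carries the parsed selection for the current field in graphql-js, so a small helper (illustrative, and deliberately ignoring fragments and aliases) can check whether a subfield was actually requested before deciding to pre-fetch:

```javascript
// Sketch: decide whether to pre-fetch `content` by inspecting the query AST.
// Deliberately ignores fragments and aliases; real code should handle those too.
function wantsField(info, fieldName) {
  return info.fieldNodes.some((node) =>
    (node.selectionSet?.selections ?? []).some(
      (sel) => sel.kind === 'Field' && sel.name.value === fieldName
    )
  );
}

// A hand-built stand-in for the `info` object a ProjectResolver.documents call
// would receive for the query above ({ documents { id content } }):
const fakeInfo = {
  fieldNodes: [{
    selectionSet: {
      selections: [
        { kind: 'Field', name: { value: 'id' } },
        { kind: 'Field', name: { value: 'content' } },
      ],
    },
  }],
};

wantsField(fakeInfo, 'content'); // content was requested: worth pre-fetching
```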
Imagine a scenario where you have two DataLoaders: ProjectDocumentsLoader and DocumentLoader. ProjectDocumentsLoader can prime DocumentLoader with its results as a post-processing step. I like to wrap my DataLoaders in a lightweight abstraction to deal with pre- and post-processing.
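One illustrative shape for that wrapper (all names here are hypothetical): a higher-order function that runs a priming hook over whatever the inner loader returns as a post-processing step:

```javascript
// Sketch of a lightweight post-processing wrapper around a loader function.
// `primeInto` is anything with a DataLoader-style prime(key, value) method.
function withPriming(loadFn, primeInto) {
  return async (key) => {
    const results = await loadFn(key);
    // Post-processing: seed every fetched document into the generic loader.
    results.forEach((doc) => primeInto.prime(doc.id, doc));
    return results;
  };
}

// Fake collaborators to show the flow:
const primed = new Map();
const documentLoader = { prime: (id, doc) => primed.set(id, doc) };
const loadProjectDocuments = async (projectId) => [
  { id: 'doc-1', content: 'hello' },
  { id: 'doc-2', content: 'world' },
];

const projectDocumentsLoader = withPriming(loadProjectDocuments, documentLoader);
```

After `projectDocumentsLoader('cool-beans')` resolves, `documentLoader` already has both documents cached, so the DocumentResolver's loads are free.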
So final answer: Your GraphQL resolvers should be super lazy, with the option of pre-fetching so long as it's an optimization and never the source of truth.