Remote Subschemas & Executors
Remote subschemas are objects that pair a non-executable GraphQLSchema instance with an Executor function, which takes an object describing a GraphQL request. Usually this executor performs the request over HTTP and returns an ExecutionResult from the remote service.
Configuring Subschemas
In the example above, the extra "subschema" wrapper objects may look verbose at first glance, but they are basic implementations of the SubschemaConfig interface, which accepts several additional settings (discussed throughout this guide):
export interface SubschemaConfig {
  schema: GraphQLSchema
  executor?: Executor
  createProxyingResolver?: CreateProxyingResolverFn
  rootValue?: Record<string, any>
  transforms?: Array<Transform>
  merge?: Record<string, MergedTypeConfig>
  batch?: boolean
  batchingOptions?: {
    extensionsReducer?: (mergedExtensions: Record<string, any>, request: Request) => Record<string, any>
    dataLoaderOptions?: DataLoader.Options<K, V, C>
  }
}
Subschema config should directly provide as many settings as possible to avoid unnecessary layers of delegation. For example, while we could pre-wrap a subschema with transforms and a remote executor, that would be far less efficient than providing the schema, transforms, and executor options directly to subschema config.
Also note that these subschema config objects may need to be referenced again in other stitching contexts, such as schema extensions. With that in mind, you may want to export your subschema configs from their module(s).
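For example, a posts service module might export a single config object that supplies its schema, executor, and transforms directly. This is a minimal sketch; the RenameTypes transform and the endpoint URL are illustrative assumptions, not part of the example above:

import { SubschemaConfig } from '@graphql-tools/delegate'
import { RenameTypes, schemaFromExecutor } from '@graphql-tools/wrap'
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

const postsExecutor = buildHTTPExecutor({ endpoint: 'https://posts.example.com/graphql' })

// Everything lives on one config object rather than on pre-wrapped schema layers,
// and the config is exported so other stitching contexts can reference it.
export const postsSubschema: SubschemaConfig = {
  schema: await schemaFromExecutor(postsExecutor),
  executor: postsExecutor,
  transforms: [new RenameTypes(name => `Posts_${name}`)]
}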
The batch flag enables Batch Execution (covered below).
Remote Subschemas via HTTP
To include a remote schema in the combined gateway, you must provide at least the schema and executor subschema config options.
import { schemaFromExecutor } from '@graphql-tools/wrap'
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

const remoteExecutor = buildHTTPExecutor({
  endpoint: 'https://my.remote.service/graphql'
})

export const postsSubschema = {
  schema: await schemaFromExecutor(remoteExecutor),
  executor: remoteExecutor
}
- schema: a non-executable schema representing the remote API. The remote schema may be obtained using introspection, or fetched as a flat SDL string (from a server or repo) and built into a schema using buildSchema (see the sketch after this list). Note that not all GraphQL servers enable introspection, and those that do will not include custom directives.
- executor: a generic method that performs requests to a remote schema. It's quite simple to write your own. Subschema config uses the executor for query and mutation operations. See the handbook example.
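For the SDL route, here is a minimal sketch; the raw-SDL URL is an illustrative assumption:

import { fetch } from '@whatwg-node/fetch'
import { buildSchema } from 'graphql'
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

// Assumes the service exposes its SDL as plain text at this (hypothetical) URL.
const sdl = await fetch('https://my.remote.service/schema.graphql').then(res => res.text())

export const postsSubschema = {
  // non-executable schema built from the SDL string
  schema: buildSchema(sdl),
  executor: buildHTTPExecutor({ endpoint: 'https://my.remote.service/graphql' })
}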
Executors
You can use ready-to-use executors from GraphQL Tools or write your own.
Executors are not responsible for validating requests. By default, validation is expected to happen at the gateway level against the gateway request, since the remote APIs already perform their own validation. However, if you still want to validate requests at the subschema level, you can use the validateRequest option of the subschema configuration object.
HTTP Executor (@graphql-tools/executor-http)
This package allows you to create an executor for your HTTP service with the following features:
- GraphQL over HTTP protocol for queries and mutations
- Server-sent events as a response type for subscriptions, just as GraphQL Yoga implements it
- GraphQL over Server-Sent Events protocol for subscriptions in distinct connection mode
- GraphQL multipart request specification for file uploads and all other types of multipart requests
- RFC: GraphQL Defer and Stream Directives for incremental delivery
- Different fetch strategies like timeout and retry
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

const executor = buildHTTPExecutor({
  endpoint: 'http://localhost:4001/graphql',
  // optional, default is false
  useGETForQueries: false,
  // optional
  headers: {
    authorization: 'Bearer MY-TOKEN'
  },
  // optional, default is 'POST'
  method: 'POST',
  // optional timeout, default is Infinity
  timeout: Infinity,
  // request credentials (default: 'same-origin'), see https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
  credentials: 'same-origin',
  // number of retry attempts on failure, default is 0
  retry: 0
})
Dynamic Headers and Endpoint URL
It is possible to change headers and the endpoint URL dynamically: the HTTP executor respects the extensions object passed in the request.
import { parse } from 'graphql'

executor({
  document: parse(/* GraphQL */ `
    {
      me {
        id
        username
        name
      }
    }
  `),
  extensions: {
    headers: {
      Authorization: 'my-token'
    },
    endpoint: 'http://my-dynamic-endpoint.com/graphql'
  }
})
GraphQL over WebSocket Protocol Executor (@graphql-tools/executor-graphql-ws)
This package allows you to create an executor for a service that supports the GraphQL over WebSocket protocol. WebSockets are usually used for subscriptions, or as a permanent connection for queries and mutations in case of heavy traffic.
import { buildGraphQLWSExecutor } from '@graphql-tools/executor-graphql-ws'

const executor = buildGraphQLWSExecutor({
  endpoint: 'ws://localhost:4001/graphql',
  connectionParams: {
    myToken: 'my-token-for-auth'
  }
})
Combining WS and HTTP executors to use WS only for subscriptions
import { buildGraphQLWSExecutor } from '@graphql-tools/executor-graphql-ws'
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

function buildCombinedExecutor() {
  const httpExecutor = buildHTTPExecutor({
    endpoint: 'http://localhost:4001/graphql'
  })
  const wsExecutor = buildGraphQLWSExecutor({
    endpoint: 'ws://localhost:4001/graphql'
  })
  return executorRequest => {
    // route subscriptions over WebSocket, everything else over HTTP
    if (executorRequest.operationType === 'subscription') {
      return wsExecutor(executorRequest)
    }
    return httpExecutor(executorRequest)
  }
}
Creating a custom executor
You can use an executor with any fetching algorithm, as long as it takes an ExecutorRequest and returns an ExecutionResult. An executor is a function capable of retrieving GraphQL results; it works the same way a GraphQL client fetches data, and it is used by several graphql-tools features to run introspection or fetch results during execution.
type Executor = (
  executorRequest: ExecutorRequest
) => Promise<ExecutionResult | AsyncIterable<ExecutionResult>> | ExecutionResult

type ExecutorRequest = {
  document: DocumentNode
  variables?: Object
  operationName?: string
  operationType?: 'query' | 'mutation' | 'subscription'
  context?: Object
  info?: GraphQLResolveInfo
}
Create a custom HTTP executor using Fetch
import { fetch } from '@whatwg-node/fetch'
import { print } from 'graphql'
import { AsyncExecutor } from '@graphql-tools/utils'

const executor: AsyncExecutor = async ({ document, variables, operationName, extensions }) => {
  const query = print(document)
  const fetchResult = await fetch('http://example.com/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json'
    },
    body: JSON.stringify({ query, variables, operationName, extensions })
  })
  return fetchResult.json()
}
Create a hybrid executor to use WS for subscriptions
Sometimes you only want to run subscription operations over WebSocket. In that case, you identify the operation type and then call the corresponding executor for it.
With this executor, query and mutation operations will be executed over HTTP (using @whatwg-node/fetch) and subscription operations will be executed via WebSocket (using graphql-ws).
import { fetch } from '@whatwg-node/fetch'
import { print } from 'graphql'
import { observableToAsyncIterable, AsyncExecutor } from '@graphql-tools/utils'
import { createClient } from 'graphql-ws'

const HTTP_GRAPHQL_ENDPOINT = 'http://localhost:3000/graphql'
const WS_GRAPHQL_ENDPOINT = 'ws://localhost:3000/graphql'

const subscriptionClient = createClient({
  url: WS_GRAPHQL_ENDPOINT
})

const httpExecutor: AsyncExecutor = async ({ document, variables, operationName, extensions }) => {
  const query = print(document)
  const fetchResult = await fetch(HTTP_GRAPHQL_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json'
    },
    body: JSON.stringify({ query, variables, operationName, extensions })
  })
  return fetchResult.json()
}

const wsExecutor: AsyncExecutor = async ({ document, variables, operationName, extensions }) =>
  observableToAsyncIterable({
    subscribe: observer => ({
      unsubscribe: subscriptionClient.subscribe(
        {
          query: print(document),
          variables: variables as Record<string, any>,
          operationName,
          extensions
        },
        {
          next: data => observer.next?.(data as unknown),
          error(err) {
            if (!observer.error) return
            if (err instanceof Error) {
              observer.error(err)
            } else if (err instanceof CloseEvent) {
              observer.error(new Error(`Socket closed with event ${err.code}`))
            } else if (Array.isArray(err)) {
              // GraphQLError[]
              observer.error(new Error(err.map(({ message }) => message).join(', ')))
            }
          },
          complete: () => observer.complete?.()
        }
      )
    })
  })
const executor: AsyncExecutor = async executorRequest => {
  // subscription operations should be handled by the wsExecutor
  if (executorRequest.operationType === 'subscription') {
    return wsExecutor(executorRequest)
  }
  // all other operations should be handled by the httpExecutor
  return httpExecutor(executorRequest)
}
Introspecting Schemas using Executors
If you don't have the schema definition available as a URL or a local file, you can introspect the endpoint to fetch the schema. After creating an executor, you can use schemaFromExecutor to fetch the schema through it.
import { schemaFromExecutor } from '@graphql-tools/wrap'
import { buildHTTPExecutor } from '@graphql-tools/executor-http'

const executor = buildHTTPExecutor({
  endpoint: 'http://localhost:4001/graphql'
})

const schema = await schemaFromExecutor(executor)

const subschema = {
  schema,
  executor
}
Batch Execution
Batch execution is a technique for consolidating multiple operations that target a single schema. Rather than executing each operation individually, all operations can be combined and executed as one.
For example, given the following GraphQL operations:
query ($arg: String) {
  field1
  field3(input: $arg)
}

query ($arg: String) {
  tricky: field2
  field3(input: $arg)
}
These can be merged into one operation, and the resulting data can be unpacked into the original shape of the multiple requests:
query ($_0_arg: String, $_1_arg: String) {
  _0_field1: field1
  _0_field3: field3(input: $_0_arg)
  _1_tricky: field2
  _1_field3: field3(input: $_1_arg)
}
Batch execution is useful because:
- Multiple operations can be combined into one network request when targeting remote services.
- Combined operations are guaranteed to multiplex, even with servers that execute incoming requests serially.
- Smaller and more granular GraphQL queries may be composed and cached individually, and then batched. This offers a strategy for sub-request caching.
Note: an alternative batching pattern transmits multiple operations as an array, and requires services to be specially configured for array inputs. GraphQL Tools uses the plain GraphQL approach devised by Gatsby and is compatible with any standard GraphQL service.
Batch the Executor (Query Batching)
Remember that you don't need to create a batch executor with this method specifically; a subschema configuration can simply set the batch: true flag instead.
Once you have an executor for your service, you may call it directly or wrap it with a batch executor using the createBatchingExecutor method from @graphql-tools/batch-execute:
import { createBatchingExecutor } from '@graphql-tools/batch-execute'
import { parse } from 'graphql'

const myExec = ({ document, variables }) => {
  /* ... */
}

const myBatchExec = createBatchingExecutor(myExec)

// Perform a batch:
const [first, second] = await Promise.all([
  myBatchExec({
    document: parse('query($input:String) { a:field1 b:field2(input: $input) }'),
    variables: { input: 'hello' }
  }),
  myBatchExec({
    document: parse('query($input:String) { field2(input: $input) }'),
    variables: { input: 'world' }
  })
])

// The two documents above are merged and executed as:
// query($_0_input: String, $_1_input: String) {
//   _0_a: field1
//   _0_b: field2(input: $_0_input)
//   _1_field2: field2(input: $_1_input)
// }
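During stitching you get the same behavior by setting the batch flag on the subschema config instead of wrapping the executor yourself. A minimal sketch, with schema and executor assumed to be defined elsewhere:

const postsSubschema = {
  schema,
  executor,
  // operations delegated to this subschema are merged into batched requests
  batch: true
}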
When using a batch executor, remember that multiple calls must be performed synchronously and all results awaited as one. Awaiting the results of each batching call individually will behave like a normal executor.
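For contrast, a sketch of sequential calls that will not batch (reusing myBatchExec from above):

// Each call is awaited before the next is issued, so no merging occurs.
const first = await myBatchExec({ document: parse('query { field1 }') })
const second = await myBatchExec({ document: parse('query { field2 }') })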
Merging Algorithm
Batch merging uses several transformations to build a request:
- Replace root-level fragment spreads with inline fragments.
- Add uniquely prefixed aliases to all root-level fields.
- Uniquely prefix all variable definitions and their references.
- Uniquely prefix all fragment definitions and their spreads.
- Prune orphaned fragment definitions.
The results are then extracted with a series of reversals:
- Redistribute prefixed fields among original requests.
- Restore original root field aliases.
- Redistribute errors among their corresponding requests.
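For instance, given the merged operation shown earlier and assuming each field resolves to a string, the raw batched result and its redistributed pieces would look roughly like this:

// Raw result of the merged operation:
// { "data": { "_0_field1": "A", "_0_field3": "B", "_1_tricky": "C", "_1_field3": "D" } }

// Redistributed to the first original request (prefixes stripped):
// { "data": { "field1": "A", "field3": "B" } }

// Redistributed to the second original request (original alias restored):
// { "data": { "tricky": "C", "field3": "D" } }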
You can see this example, which demonstrates the difference between batch execution and regular execution.