This demo shows how to use Apollo Engine with the Graphcool API Gateway pattern. It contains a simple example with a single endpoint, an advanced example that stitches together two endpoints, and an example that takes advantage of the new Apollo Cache Control.
- Register on https://www.apollographql.com/engine/
- Create a new service and note the API Key
- Create a `.env` file in the root of your project folder with the following keys: `GRAPHCOOL_ENDPOINT` and `APOLLO_ENGINE_KEY`:

  ```
  GRAPHCOOL_ENDPOINT=https://api.graph.cool/simple/v1/...
  APOLLO_ENGINE_KEY=service:xxx:.......
  ```
- Run `yarn install` or `npm install`
- Run `yarn start` or `npm start`
- Open http://localhost:3000/playground and execute some queries
- Go over to the Apollo Engine website to check your metrics
- Unfortunately, `makeRemoteExecutableSchema` turns every query into a single request to the underlying API (our Graphcool API). This means the metrics will not show any useful data about how your query is actually executed by the Graphcool server. It does, however, give you an overall indication of relative performance.
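For reference, here is a minimal sketch of the gateway setup described above, assuming graphql-tools' remote schema helpers, apollo-link-http, and node-fetch; the function name `createGatewaySchema` is made up for illustration and this is not necessarily the demo's exact code:

```js
// Minimal sketch of the gateway pattern, assuming graphql-tools, apollo-link-http
// and node-fetch (illustrative, not necessarily the demo's exact code).
const { introspectSchema, makeRemoteExecutableSchema } = require('graphql-tools');
const { HttpLink } = require('apollo-link-http');
const fetch = require('node-fetch');

async function createGatewaySchema() {
  // Every operation executed against the resulting schema is delegated as a
  // single HTTP request to the Graphcool endpoint, which is why Engine only
  // sees one big span instead of per-resolver timings.
  const link = new HttpLink({ uri: process.env.GRAPHCOOL_ENDPOINT, fetch });
  const remoteSchema = await introspectSchema(link);
  return makeRemoteExecutableSchema({ schema: remoteSchema, link });
}

module.exports = createGatewaySchema;
```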
The advanced example combines two different endpoints, one with Posts and one with Comments. Now the tracing from Apollo Engine becomes a lot more interesting. I selected two different regions to illustrate the difference between the two endpoints. A sketch of the stitching setup follows the steps below.
- Create a `.env` file in the root of your project folder with the following keys: `GRAPHCOOL_POST_ENDPOINT`, `GRAPHCOOL_COMMENT_ENDPOINT`, and `APOLLO_ENGINE_KEY`. If you leave out the endpoint keys, it will use two demo endpoints (read-only). If you want to use your own endpoints, use the schemas from the `schemas` folder to set up your endpoints.
- Start with `yarn start:merged` or `npm start:merged`
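A rough sketch of the kind of stitching `start:merged` performs, again assuming graphql-tools, apollo-link-http, and node-fetch; the helper names `createRemoteSchema` and `createMergedSchema` are made up for illustration:

```js
// Rough sketch of stitching the Post and Comment endpoints into one schema.
// Packages assumed: graphql-tools, apollo-link-http, node-fetch.
const { mergeSchemas, makeRemoteExecutableSchema, introspectSchema } = require('graphql-tools');
const { HttpLink } = require('apollo-link-http');
const fetch = require('node-fetch');

async function createRemoteSchema(uri) {
  const link = new HttpLink({ uri, fetch });
  return makeRemoteExecutableSchema({ schema: await introspectSchema(link), link });
}

async function createMergedSchema() {
  const postSchema = await createRemoteSchema(process.env.GRAPHCOOL_POST_ENDPOINT);
  const commentSchema = await createRemoteSchema(process.env.GRAPHCOOL_COMMENT_ENDPOINT);
  // With two remote schemas, Engine's traces show a separate span per endpoint,
  // which is what makes the advanced example's metrics more interesting.
  return mergeSchemas({ schemas: [postSchema, commentSchema] });
}

module.exports = createMergedSchema;
```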
The caching example takes advantage of the new Apollo Cache Control standard, implemented by Apollo Server, and recognized by Apollo Engine. Based on caching hints delivered by Apollo Server, Apollo Engine applies intelligent caching to the queries.
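As a rough illustration of how those hints can be produced, the sketch below assumes apollo-server-express 1.x (its `tracing` and `cacheControl` options) and the apollo-engine 1.x proxy; the toy schema, port, and wiring are illustrative and not the demo's actual code. Per-type and per-field hints are normally declared with the `@cacheControl(maxAge: ...)` directive in the schema.

```js
// Illustrative wiring only: apollo-server-express 1.x and apollo-engine 1.x assumed.
const express = require('express');
const bodyParser = require('body-parser');
const { graphqlExpress } = require('apollo-server-express');
const { makeExecutableSchema } = require('graphql-tools');
const { ApolloEngine } = require('apollo-engine');

// Toy schema standing in for the merged Graphcool schema.
const schema = makeExecutableSchema({
  typeDefs: `
    type Query {
      hello: String
    }
  `,
  resolvers: { Query: { hello: () => 'world' } },
});

const app = express();
app.use(
  '/graphql',
  bodyParser.json(),
  graphqlExpress({
    schema,
    tracing: true,      // resolver timings reported to Apollo Engine
    cacheControl: true, // emits cache hints in the response's `extensions` node
  })
);

// The Engine proxy sits in front of the server, reads the hints,
// and answers repeated queries from its local in-memory cache.
const engine = new ApolloEngine({ apiKey: process.env.APOLLO_ENGINE_KEY });
engine.listen({ port: 3000, expressApp: app });
```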
- Use the same setup as for the advanced example
- Execute a query in the Playground. The first time, you will notice an extra result node called `extensions`. This node contains caching hints (see the sample response below).
- The second time the query runs, caching is applied by Apollo Engine, and the results are returned immediately. This is reflected in the Apollo Engine report: the first request took 792 ms, the second request 1 ms, thanks to Apollo Engine's in-memory local cache.
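For reference, the `extensions` node mentioned above looks roughly like this; the paths and `maxAge` values are illustrative and depend on your schema and the hints you set:

```json
"extensions": {
  "cacheControl": {
    "version": 1,
    "hints": [
      { "path": ["allPosts"], "maxAge": 60 },
      { "path": ["allPosts", "comments"], "maxAge": 30 }
    ]
  }
}
```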