At a minimum, you will need:

- A Github account.
- An enterprise Apollo GraphOS account.
  - You can use [a free enterprise trial account](https://studio.apollographql.com/signup?type=enterprise-trial) if you don't have an enterprise contract.
- An account for either:
  - Google Cloud Platform (GCP).
  - Amazon Web Services (AWS).

Further requirements are noted within the [setup instructions](./docs/setup.md), as each type of environment requires additional tooling.
### [Setup](/docs/setup.md)

During setup, you'll be:

- Gathering accounts and credentials
- Provisioning resources
- Deploying the applications, including router, subgraphs, client, and observability tools

### [Cleanup](/docs/cleanup.md)

Once finished, you can clean up your environments by following the above document.
- [Minikube](https://minikube.sigs.k8s.io/docs/start/) configured according to the link
- [Helm](https://helm.sh/docs/intro/install/)-->

### Gather accounts
### Export all necessary variables

First, change into the directory for the cloud provider you wish to use. All Terraform is within the root-level `terraform` folder, with each provider having a subfolder within. For the below examples, we'll assume GCP; however, the others use the same commands.

Next, make a copy of `.env.sample` called `.env` to keep track of these values. You can run `source .env` to reload all environment variables in a new terminal session.
```sh
# in either terraform/aws or terraform/gcp
cp .env.sample .env
```
Edit the new `.env` file:

```sh
export PROJECT_ID="<your google cloud project id>" # if using AWS, you will not see this line and can omit this
export APOLLO_KEY="<your apollo personal api key>"
export GITHUB_ORG="<your github account name or organization name>"
export TF_VAR_github_token="<your github personal access token>"
```
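Once the file is filled in, a quick sanity check can confirm everything is set before running any Terraform. This is a minimal sketch, not part of the reference repo; it checks only the variable names from the sample above, and deliberately skips `PROJECT_ID` since that is GCP-only:

```sh
# load the .env values into the current shell session, if the file exists;
# `set -a` marks every variable assigned while sourcing for export
if [ -f ./.env ]; then set -a; . ./.env; set +a; fi

# report any required value that is still empty (PROJECT_ID is GCP-only,
# so it is deliberately not checked here)
missing=""
for var in APOLLO_KEY GITHUB_ORG TF_VAR_github_token; do
  eval "val=\${$var}"
  [ -n "$val" ] || missing="$missing $var"
done
if [ -z "$missing" ]; then echo "environment looks good"; else echo "missing:$missing" >&2; fi
```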
<!--#### Minikube

```sh
gh auth login
```-->

#### General
**Note: If using a cloud provider, the following commands will create resources on your cloud provider account and begin to accrue a cost.** The reference infrastructure defaults to a lower-cost environment (small node count and instance size); however, it will not be covered by either GCP's or AWS's free tiers.

<!--**Note: If you are using Minikube, this will not create a local cluster and instead configure the local environment to be ready to be deployed to.**-->

```sh
# for example, if using GCP
```
After this completes, you're ready to deploy your subgraphs!

### Deploy subgraphs to dev

```sh
gh workflow run "Merge to Main" --repo $GITHUB_ORG/reference-architecture
gh workflow run "Merge to Main" --repo $GITHUB_ORG/apollo-supergraph-k8s-subgraph-b
# this deploys a dependency for prod, see note below
gh workflow run "Deploy Open Telemetry Collector" --repo $GITHUB_ORG/apollo-supergraph-k8s-infra
```
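`gh workflow run` only queues the run and returns immediately, so you may want to confirm the workflows actually succeeded before moving on. A minimal sketch using the GitHub CLI, with the same repo names as above:

```sh
# list the three most recent workflow runs for the infra repo;
# repeat with the subgraph repos to check those deployments too
gh run list --repo "$GITHUB_ORG/apollo-supergraph-k8s-infra" --limit 3
```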
Follow the below instructions for the cloud provider you are using.

```sh
kubectx apollo-supergraph-k8s-prod
ROUTER_HOSTNAME=$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.ip}")
open http://$ROUTER_HOSTNAME
```

Upon running the above commands, you'll have the Router page open and you can make requests against your newly deployed supergraph!
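Note that the jsonpath above reads the ingress `ip` field, which GCP load balancers populate; on AWS, the load balancer status usually carries a DNS `hostname` instead of an IP. A hedged variant for AWS (assuming the same `router` namespace) swaps only that field:

```sh
kubectx apollo-supergraph-k8s-prod
# AWS ELBs publish a DNS name in .status.loadBalancer.ingress[].hostname
ROUTER_HOSTNAME=$(kubectl get ingress -n router -o jsonpath="{.*.*.status.loadBalancer.ingress.*.hostname}")
open "http://$ROUTER_HOSTNAME"
```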
### Deploy the client

The last step to getting fully configured is to deploy the client to both environments. To do so, we'll need our router ingress URL to point the client to. This can be pulled from the prior commands, so if you are using the same terminal session, feel free to skip the next set of commands.

This will create another ingress specific to the client, so, much like the router, you can run the following commands depending on your provider. As with the other ingress, this may take a few minutes to become active.
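The client commands themselves are elided in this excerpt, but as a sketch, reading the client ingress address would follow the same pattern as the router. The `client` namespace below is a hypothetical, not taken from the repo; use whatever namespace your manifests actually create the client ingress in:

```sh
# hypothetical namespace -- adjust to where the client ingress actually lives
CLIENT_IP=$(kubectl get ingress -n client -o jsonpath="{.*.*.status.loadBalancer.ingress.*.ip}")
open "http://$CLIENT_IP"
```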