This repository has been archived by the owner on May 6, 2022. It is now read-only.

Address documentation gaps (#607)
* Moving the walk-through into a new file

Fixes #554

* Improve docs with more instruction

Most of this covers writing out a new .kubeconfig, required
namespace creation, and set up for direnv. However, some of these
changes are simply doing line wrapping at 80 columns.

Fixes #554
Jeff Peeler authored and arschles committed Mar 23, 2017
1 parent fd1566e commit e6d16f2
Showing 2 changed files with 428 additions and 258 deletions.
272 changes: 14 additions & 258 deletions docs/DEVGUIDE.md
issue by adding a comment to it of the form:
#dibs

However, it is a good idea to discuss the issue, and your intent to work on it,
with the other members via the [slack channel](https://kubernetes.slack.com/messages/sig-service-catalog)
to make sure there isn't some other work already going on with respect to that
issue.

When you create a pull request (PR) that completely addresses an open issue
please include a line in the initial comment that looks like:
    export KUBECONFIG=/home/yippee/code/service-catalog/.kubeconfig
Use the [`catalog` chart](../charts/catalog) to deploy the service
catalog into your cluster. The easiest way to get started is to deploy into a
cluster you regularly use and are familiar with. One of the choices you can
make when deploying the catalog is whether to make the API server store its
resources in an external etcd server, or in third party resources.

If you choose etcd storage, the helm chart will launch an etcd server for you
in the same pod as the service-catalog API server. You will be responsible for
the data in the etcd server container.

If you choose third party resources storage, the helm chart will not launch an
etcd server, but will instead instruct the API server to store all resources in
the Kubernetes cluster as third party resources.

## Demo walkthrough

The rest of this guide is a walkthrough that is essentially the same as a
basic demo of the catalog.

Now that the system has been deployed to our Kubernetes cluster, we can use
`kubectl` to talk to the service catalog API server. The service catalog API
has four resources:

- `Broker`: a service broker whose services appear in the catalog
- `ServiceClass`: a service offered by a particular service broker
- `Instance`: an instance of a `ServiceClass` provisioned by the `Broker` for
that `ServiceClass`
- `Binding`: a binding to an `Instance` which is manifested into a Kubernetes
namespace

These resources are building blocks of the service catalog in Kubernetes from an
API standpoint.

----

#### Note: accessing the service catalog

Unfortunately, `kubectl` doesn't know how to speak to both the service catalog
API server and the main Kubernetes API server without switching contexts or
`kubeconfig` files. For now, the best way to access the service catalog API
server is via a dedicated `kubeconfig` file. You can manage the kubeconfig in
use within a directory using the `direnv` tool.
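For example, with `direnv` installed, an `.envrc` file at the repository root
will switch `kubectl` to the service catalog API server whenever you `cd` into
the directory. The path below is illustrative; use wherever you wrote out your
kubeconfig:

```
export KUBECONFIG=/home/yippee/code/service-catalog/.kubeconfig
```

After creating or editing `.envrc`, run `direnv allow` to approve it.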

Additionally, you'll need a version 1.6 beta build of `kubectl` to execute
`create` operations on the service catalog API server. To get one, execute the
following (this fetches the macOS/amd64 binary; on Linux, substitute `linux`
for `darwin` in the URL):

```console
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.3/bin/darwin/amd64/kubectl
chmod +x ./kubectl
```

For the rest of this document, we'll assume that all `kubectl` commands use
this newly downloaded version 1.6 binary.

----

Because we haven't created any resources in the service-catalog API server yet,
`kubectl get` will return an empty list of resources:

$ kubectl get brokers,serviceclasses,instances,bindings

### Installing a UPS Broker
Service Catalog requires brokers to operate, and this repository includes a
User Provided Service broker (UPS from now on), which allows consumption of
existing services through the Service Catalog model. Just like any other
broker, the UPS broker needs to be running somewhere before it can be added to
the catalog, so first deploy the [`ups-broker` chart](../charts/ups-broker)
into your cluster, just as you installed the catalog chart above.
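With the Helm 2 client of that era, deploying the chart might look like the
following. The release name and namespace here are illustrative, though they
match the in-cluster broker URL used later in this walkthrough:

```console
helm install charts/ups-broker --name ups-broker --namespace ups-broker
```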

### Registering a UPS Broker

Next, we'll register a service broker with the catalog. To do this, we'll
create a new [`Broker`](../contrib/examples/walkthrough/ups-broker.yaml)
resource:

$ kubectl create -f contrib/examples/walkthrough/ups-broker.yaml
broker "ups-broker" created
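The referenced file is a short manifest. A minimal `Broker` of this shape (a
sketch; the URL points at the service the `ups-broker` chart creates in the
cluster) looks like:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: ups-broker
spec:
  # In-cluster address of the UPS broker deployed by the chart
  url: http://ups-broker.ups-broker.svc.cluster.local:8000
```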

Kubernetes APIs are intention based; creating this resource indicates that we
want the service broker it represents to be consumed in the catalog. When we
create the resource, the controller handles loading that broker into the
catalog by fetching the services it provides and adding them to the catalog.

We can check the status of the broker using `kubectl get`:

$ kubectl get brokers ups-broker -o yaml

We should see something like:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  creationTimestamp: 2017-03-03T04:11:17Z
  finalizers:
  - kubernetes
  name: ups-broker
  resourceVersion: "6"
  selfLink: /apis/servicecatalog.k8s.io/v1alpha1/brokers/ups-broker
  uid: 72fa629b-ffc7-11e6-b111-0242ac110005
spec:
  url: http://ups-broker.ups-broker.svc.cluster.local:8000
status:
  conditions:
  - message: Successfully fetched catalog from broker
    reason: FetchedCatalog
    status: "True"
    type: Ready
```
Notice that the controller has set this broker's `status` field to reflect that
its catalog has been added to our cluster's catalog.

### Viewing ServiceClasses

The controller created a `ServiceClass` for each service that the broker we
added provides. We can view the `ServiceClass` resources available in the
cluster by doing:

$ kubectl get serviceclasses
NAME                    KIND
user-provided-service   ServiceClass.v1alpha1.servicecatalog.k8s.io

It looks like the broker we added provides a service called
`user-provided-service`. Let's check it out:

$ kubectl get serviceclasses user-provided-service -o yaml

We should see something like:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceClass
metadata:
  creationTimestamp: 2017-03-03T04:11:17Z
  name: user-provided-service
  resourceVersion: "7"
  selfLink: /apis/servicecatalog.k8s.io/v1alpha1/serviceclasses/user-provided-service
  uid: 72fef5ce-ffc7-11e6-b111-0242ac110005
brokerName: ups-broker
osbGuid: 4F6E6CF6-FFDD-425F-A2C7-3C9258AD2468
bindable: false
planUpdatable: false
plans:
- name: default
  osbFree: true
  osbGuid: 86064792-7ea2-467b-af93-ac9694d96d52
```

### Provisioning a new Instance

Let's provision a new instance of the `user-provided-service`. Note that the
example places the `Instance` in the `test-ns` namespace, so that namespace
must exist before you proceed. To provision, we create a new
[`Instance`](../contrib/examples/walkthrough/ups-instance.yaml) to indicate
that we want a new instance of that service:

$ kubectl create -f contrib/examples/walkthrough/ups-instance.yaml
instance "ups-instance" created
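For reference, an `Instance` manifest of this general shape (a sketch; the
actual example file may differ in detail) is:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: ups-instance
  namespace: test-ns
spec:
  # Which service and plan from the catalog to provision
  serviceClassName: user-provided-service
  planName: default
```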

We can check the status of the `Instance` using `kubectl get`:

$ kubectl get instances -n test-ns ups-instance -o yaml

We should see something like:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  creationTimestamp: 2017-03-03T04:26:08Z
  name: ups-instance
  namespace: test-ns
  resourceVersion: "9"
  selfLink: /apis/servicecatalog.k8s.io/v1alpha1/namespaces/test-ns/instances/ups-instance
  uid: 8654e626-ffc9-11e6-b111-0242ac110005
spec:
  osbGuid: 34c984e1-4626-4574-8a95-9e500d0d48d3
  planName: default
  serviceClassName: user-provided-service
status:
  conditions:
  - message: The instance was provisioned successfully
    reason: ProvisionedSuccessfully
    status: "True"
    type: Ready
```

### Bind to the Instance

Now that our `Instance` has been created, let's bind to it. To do this, we
create a new [`Binding`](../contrib/examples/walkthrough/ups-binding.yaml).

$ kubectl create -f contrib/examples/walkthrough/ups-binding.yaml
binding "ups-binding" created
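For reference, a `Binding` manifest takes this general shape (a sketch; the
actual example file may differ in detail):

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: ups-binding
  namespace: test-ns
spec:
  # The Instance to bind to, and the secret to inject the result into
  instanceRef:
    name: ups-instance
  secretName: my-secret
```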

We can check the status of the `Binding` using `kubectl get`:

$ kubectl get bindings -n test-ns ups-binding -o yaml

We should see something like:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  creationTimestamp: 2017-03-07T01:44:36Z
  finalizers:
  - kubernetes
  name: ups-binding
  namespace: test-ns
  resourceVersion: "29"
  selfLink: /apis/servicecatalog.k8s.io/v1alpha1/namespaces/test-ns/bindings/ups-binding
  uid: 9eb2cdce-02d7-11e7-8edb-0242ac110005
spec:
  instanceRef:
    name: ups-instance
  osbGuid: b041db94-a5a0-41a2-87ae-1025ba760918
  secretName: my-secret
status:
  conditions:
  - message: Injected bind result
    reason: InjectedBindResult
    status: "True"
    type: Ready
```

Notice that the status has a `Ready` condition set. This means our binding is
ready to use. If we look at the secrets in our `test-ns` namespace in
Kubernetes, we should see:

$ kubectl get secrets -n test-ns
NAME                  TYPE                                  DATA      AGE
default-token-3k61z   kubernetes.io/service-account-token   3         29m
my-secret             Opaque                                2         1m

Notice that a secret named `my-secret` has been created in our namespace.
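An application can consume the binding by referencing the injected secret from
a pod spec in the usual Kubernetes way. This is a generic sketch; the pod name
and image are hypothetical and not part of the walkthrough:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ups-consumer        # hypothetical consumer pod
  namespace: test-ns
spec:
  containers:
  - name: app
    image: busybox          # placeholder image
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - secretRef:
        name: my-secret     # the secret injected by the binding
```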

### Unbind from the Instance

Now, let's unbind from the Instance. To do this, we just delete the `Binding`
that we created:

$ kubectl delete -n test-ns bindings ups-binding

If we check the secrets in the `test-ns` namespace, we should see that the
secret that was injected for us has been deleted:

$ kubectl get secrets -n test-ns
NAME                  TYPE                                  DATA      AGE
default-token-3k61z   kubernetes.io/service-account-token   3         30m

### Deprovision the Instance

Now, we can deprovision the instance. To do this, we just delete the `Instance`
that we created:

$ kubectl delete -n test-ns instances ups-instance

### Delete the broker

When an administrator wants to remove a broker and the services it offers from
the catalog, they can just delete the broker:

$ kubectl delete brokers ups-broker

And we should see that all the `ServiceClass` resources that came from that
broker were cleaned up:

$ kubectl get serviceclasses
No resources found
Check out the [walk-through](WALKTHROUGH.md) for a detailed guide to an example
deployment.