Commit ab33470: first commit (0 parents)

97 files changed: +22212 additions, -0 deletions

.gitignore

Lines changed: 1 addition & 0 deletions

```
venv
```

.idea/encodings.xml

Lines changed: 4 additions & 0 deletions
Some generated files are not rendered by default. Learn more about customizing how changed files appear on GitHub.

.idea/microservices-chat.iml

Lines changed: 18 additions & 0 deletions

.idea/misc.xml

Lines changed: 7 additions & 0 deletions

.idea/modules.xml

Lines changed: 8 additions & 0 deletions

.idea/vcs.xml

Lines changed: 6 additions & 0 deletions

.idea/workspace.xml

Lines changed: 349 additions & 0 deletions

README.md

Lines changed: 189 additions & 0 deletions
# Chat microservices

Example of 3 microservices and a database working in a Kubernetes cluster.

The objective of this project is to show a real example of our library [PyMS](https://github.com/python-microservices/pyms),
[the template](https://github.com/python-microservices/microservices-template) and
the [scaffold](https://github.com/python-microservices/microservices-scaffold).

The "how to create a cluster" part of this tutorial is based on this [Bitnami tutorial](https://docs.bitnami.com/kubernetes/get-started-kubernetes/).

## Step 1: Configure The Platform

The first step for working with Kubernetes clusters is to have Minikube installed, if you have chosen to work locally.

Install Minikube on your local system, either by using virtualization software such as VirtualBox or a local terminal.

* Browse to the [Minikube latest releases page](https://github.com/kubernetes/minikube/releases).

* Select the distribution you wish to download, depending on your operating system.

NOTE: This tutorial assumes that you are using Mac OS X or Linux. The Minikube installer for Windows is under development; to get an experimental release of Minikube for Windows, check the Minikube releases page.

* Open a new console window on the local system, or open your VirtualBox.

* To obtain the latest Minikube release, execute the following command, depending on your OS. Remember to replace the X.Y.Z and OS_DISTRIBUTION placeholders with the latest version and the software distribution of Minikube, respectively. Check the Minikube latest releases page for more information.

```bash
curl -Lo minikube https://storage.googleapis.com/minikube/releases/vX.Y.Z/minikube-OS_DISTRIBUTION-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

## Step 2: Create A Kubernetes Cluster

Starting Minikube creates a single-node cluster. Run the following command in your terminal to complete the creation of the cluster:

```bash
minikube start
```

Point your Docker client at Minikube's Docker daemon, so that images you build are available inside the cluster:

```bash
eval $(minikube docker-env)
```

To run your commands against Kubernetes clusters, the kubectl CLI is needed. Check step 3 to complete the installation of kubectl.

## Step 3: Install The Kubectl Command-Line Tool

In order to start working on a Kubernetes cluster, it is necessary to install the Kubernetes command-line tool (kubectl). Follow these steps to install the kubectl CLI:

* Execute the following commands to install the kubectl CLI. OS_DISTRIBUTION is a placeholder for the binary distribution of kubectl; remember to replace it with the corresponding distribution for your operating system.

```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/OS_DISTRIBUTION/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```

TIP: Once the kubectl CLI is installed, you can obtain information about the current version with the `kubectl version` command.

NOTE: You can also install kubectl with your package manager, e.g. `sudo apt-get install kubectl`.

* Check that kubectl is correctly installed and configured by running the `kubectl cluster-info` command:

```bash
kubectl cluster-info
```

NOTE: The `kubectl cluster-info` command shows the IP address of the Kubernetes master node and its services.

![Check Kubernetes cluster info](https://docs.bitnami.com/images/img/platforms/kubernetes/k8-tutorial-31.png)

* You can also verify the cluster by checking the nodes. Use the following command to list the connected nodes:

```bash
kubectl get nodes
```

![Check cluster node](https://docs.bitnami.com/images/img/platforms/kubernetes/k8-tutorial-32-single.png)

* To get complete information on each node, run the following:

```bash
kubectl describe node
```

![Check Kubernetes node info](https://docs.bitnami.com/images/img/platforms/kubernetes/k8-tutorial-33.png)

[Learn more about the kubectl CLI](https://kubernetes.io/docs/user-guide/kubectl-overview/).

## Step 4: Install And Configure Helm And Tiller

The easiest way to run and manage applications in a Kubernetes cluster is using Helm. Helm allows you to perform key operations for managing applications, such as install, upgrade or delete. Helm is composed of two parts: Helm (the client) and Tiller (the server). Follow the steps below to complete the Helm and Tiller installation and create the necessary Kubernetes objects to make Helm work with Role-Based Access Control (RBAC):

* To install Helm, run the following commands:

```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
```

TIP: If you are using OS X you can install it with Homebrew: `brew install kubernetes-helm`.

* Create a ClusterRole configuration file with the content below. In this example, it is named clusterrole.yaml.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
```

* To create the ClusterRole, run this command:

```bash
kubectl create -f clusterrole.yaml
```

* To create a ServiceAccount and associate it with the ClusterRole, use a ClusterRoleBinding, as below:

```bash
kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
```

* Initialize Helm as shown below:

```bash
helm init --service-account tiller
```

If you have previously initialized Helm, execute the following command to upgrade it:

```bash
helm init --upgrade --service-account tiller
```

* Check that Tiller is correctly installed by inspecting the output of `kubectl get pods`, as shown below:

```bash
kubectl --namespace kube-system get pods | grep tiller
tiller-deploy-2885612843-xrj5m 1/1 Running 0 4d
```

Once you have installed Helm, a set of useful commands to perform common actions is shown below:

![Install Helm](https://docs.bitnami.com/images/img/platforms/kubernetes/k8-tutorial-41.png)

## Step 5: Build The Images And Install The Charts

Create the Docker images:

```bash
docker build -t chat_db:v1 -f chat_db/Dockerfile chat_db/
docker build -t chat_svc:v1 -f chat_svc/Dockerfile chat_svc/
docker build -t chat_front:v1 -f chat_front/Dockerfile chat_front/
```

Check your Helm charts (render the templates without installing anything):

```bash
helm install --dry-run --debug ./chat_db/chat_db/
helm install --dry-run --debug ./chat_svc/chat_svc/
helm install --dry-run --debug ./chat_front/chat_front/
```

Install the Helm charts:

```bash
helm install --name chat-db ./chat_db/chat_db/
helm install --name chat-svc ./chat_svc/chat_svc/
helm install --name chat-front ./chat_front/chat_front/
```
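
A chart installed this way typically reads its image coordinates from a `values.yaml` file. The snippet below is only a sketch of what such a file might contain so that a chart picks up the `chat_svc:v1` image built above; the key names are assumptions for illustration, not taken from this repository's charts.

```yaml
# Hypothetical values.yaml for the chat_svc chart (key names are assumptions).
image:
  repository: chat_svc
  tag: v1
  # The images were built directly into Minikube's Docker daemon
  # (via eval $(minikube docker-env)), so they never need to be pulled
  # from a registry.
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080  # matches the port exposed by the service's Dockerfile
```

If the `--dry-run --debug` output above does not show the image tag you built, the chart's values file is the first place to look.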

## Step 6: Try It Out

Open http://127.0.0.1.nip.io/ and see the magic! ;)
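
The nip.io URL works because nip.io resolves any hostname that embeds an IP address back to that IP, so 127.0.0.1.nip.io points at localhost. For the request to reach the frontend, the cluster needs an Ingress (or equivalent) routing that host to the chat-front service. A sketch of such a resource follows; the host, service name and port are assumptions, not taken from the charts in this repository.

```yaml
# Hypothetical Ingress for the frontend (host, serviceName and servicePort
# are assumptions for illustration).
apiVersion: extensions/v1beta1  # Ingress API group current in the Helm v2 era
kind: Ingress
metadata:
  name: chat-front
spec:
  rules:
  - host: 127.0.0.1.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: chat-front
          servicePort: 8080
```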

chat_db/.gitignore

Lines changed: 108 additions & 0 deletions
```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

.idea

flake8Report.txt
pylintReport.txt
db.sqlite3
_build
```

chat_db/Dockerfile

Lines changed: 20 additions & 0 deletions
```dockerfile
FROM python:3.6.4-alpine3.7

RUN apk add --update curl gcc g++ git libffi-dev openssl-dev python3-dev build-base linux-headers \
    && rm -rf /var/cache/apk/*
RUN ln -s /usr/include/locale.h /usr/include/xlocale.h

ENV PYTHONUNBUFFERED=1 APP_HOME=/microservice/ CONFIGMAP_FILE="$APP_HOME"config-docker.yml

RUN mkdir $APP_HOME && adduser -S -D -H python && chown -R python $APP_HOME
WORKDIR $APP_HOME

ADD requirement*.txt $APP_HOME
RUN pip install --upgrade pip && pip install -r requirements-docker.txt

ADD . $APP_HOME

EXPOSE 8080
USER python

CMD ["gunicorn", "--worker-class", "gevent", "--workers", "1", "--log-level", "INFO", "--bind", "0.0.0.0:8080", "manage:app"]
```
