doc: access application #798 (Open)

wants to merge 6 commits into base: main
88 changes: 27 additions & 61 deletions blog/2022-06-27-terraform-integrate-with-vela.md
@@ -10,22 +10,21 @@ image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources
hide_table_of_contents: false
---

If you're looking for something to glue Terraform ecosystem with the Kubernetes world, congratulations! You will get exactly what you want here in this blog.
If you're looking for something to glue the Terraform ecosystem with the Kubernetes world, congratulations! You're getting exactly what you want in this blog.

We will introduce how to integrate terraform modules into KubeVela by fixing a real world problem -- "Fixing the Developer Experience of Kubernetes Port Forwarding". This idea was inspired by [article](https://inlets.dev/blog/2022/06/24/fixing-kubectl-port-forward.html) from Alex Ellis.
We will introduce how to integrate Terraform modules into KubeVela by fixing a real-world problem -- "Fixing the Developer Experience of Kubernetes Port Forwarding" -- inspired by [this article](https://inlets.dev/blog/2022/06/24/fixing-kubectl-port-forward.html) from Alex Ellis.

In general, this article will divide into two parts:
In general, this article will be divided into two parts:

* Part 1 will introduce how to glue Terraform with KubeVela; it needs some basic knowledge of both Terraform and KubeVela. You can skip this part if you don't want to extend KubeVela as a developer.
* Part.2 will introduce how KubeVela can "Fix the Developer Experience of Kubernetes Port Forwarding" by using the solution provided in Part.1 .
* Part 2 will introduce how KubeVela can 1) provision a cloud ECS instance with a public IP, and 2) use the ECS instance as a tunnel server to provide public access for any container service within an intranet environment.

OK, let's go!


## Part 1. Glue Terraform Module as KubeVela Capability

[KubeVela](https://kubevela.net/docs/) is a modern software delivery control plane.
The first question you may ask is "What benefit from doing this":
In general, [KubeVela](https://kubevela.net/docs/) is a modern software delivery control plane. You may ask: "What's the benefit of doing this?"

1. The power of gluing Terraform with the Kubernetes ecosystem, including Helm charts, in one unified solution, which helps you do GitOps, CI/CD integration, and application lifecycle management.
    - Think of deploying a product that includes a cloud database, a container service, and several Helm charts; now you can manage and deploy them together without switching between different tools.
@@ -104,7 +103,7 @@ We'll use the terraform module we have just prepared.
vela def init ecs --type component --provider alibaba --desc "Terraform configuration for Alibaba Cloud Elastic Compute Service" --git https://github.com/wonderflow/terraform-alicloud-ecs-instance.git > alibaba-ecs.yaml
```

Change the git url with your own if you have customized.
> Change the git URL to your own if you have customized the module.
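
For reference, the generated file is roughly shaped like the sketch below; exact field names and API versions may differ depending on your KubeVela version, so treat it as an illustration rather than the literal output:

```yaml
# Rough shape of the generated ComponentDefinition (illustrative only)
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: alibaba-ecs
  annotations:
    definition.oam.dev/description: Terraform configuration for Alibaba Cloud Elastic Compute Service
  labels:
    type: terraform
spec:
  workload:
    definition:
      apiVersion: terraform.core.oam.dev/v1beta2
      kind: Configuration
  schematic:
    terraform:
      # Points at the Terraform module's git repository
      configuration: https://github.com/wonderflow/terraform-alicloud-ecs-instance.git
      type: remote
```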

* Apply it to the vela control plane

@@ -256,65 +255,21 @@ By now, we have finished the server part.

### Use frp client in KubeVela

The usage of frp client is very straight-forward, we can provide public IP for any of the workload inside the cluster.
The usage of the frp client is very straightforward: we can provide a public IP for any service inside the cluster.

![](../docs/resources/terraform-ecs.png)

1. Proxy local port as a sidecar.
1. Deploy it standalone to proxy any [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/).

```shell
cat <<EOF | vela up -f -
# YAML begins
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app-with-sidecar
spec:
components:
- name: web
type: webservice
properties:
image: oamdev/hello-world:v2
ports:
- port: 8000
traits:
- type: sidecar
properties:
name: frp-client
image: oamdev/frpc:0.43.0
env:
- name: server_addr
value: "121.196.106.174"
- name: server_port
value: "9090"
- name: local_port
value: "8000"
- name: connect_name
value: "my_web"
- name: local_ip
value: "127.0.0.1"
- name: remote_port
value: "8082"
# YAML ends
EOF
```

Wow! Then you can visiting the `hello-world` by:

```
curl 121.196.106.174:8082
```

2. Deploy as standalone to proxy for any [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/).

```shell
```yaml
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: frp-proxy
spec:
components:
- name: frp
- name: frp-proxy
type: worker
properties:
image: oamdev/frpc:0.43.0
@@ -334,9 +289,11 @@ spec:
EOF
```

In this case, we specify the `local_ip` which means we're visiting the Kubernetes Service with name `velaux` in the namespace `vela-system`. As a result, you can visit velaux service from the public IP `121.196.106.174:8083`.
In this case, we specify the `local_ip` as `velaux.vela-system`, which means we're visiting the Kubernetes Service named `velaux` in the namespace `vela-system`.

As a result, you can visit the velaux service via the public IP `121.196.106.174:8083`.
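
As a quick sanity check (reusing the public IP and `remote_port` from the example above), you can confirm the tunnel works with a plain HTTP request:

```
curl 121.196.106.174:8083
```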

3. Compose them together for the same lifecycle.
2. Compose two components together for the same lifecycle.

```yaml
cat <<EOF | vela up -f -
@@ -370,20 +327,29 @@ spec:
- name: local_ip
value: "web-new.default"
- name: remote_port
value: "8081"
value: "8082"
EOF
```

The `webservice` type component will generate a service with the name of the component automatically. The `frp` component will proxy the traffic to the service `web-new` in the `default` namespace which is exactly the service generated.
Wow! Then you can visit the `hello-world` service by:

```
curl 121.196.106.174:8082
```

The `webservice` type component will automatically generate a service with the same name as the component. The `frp-web` component will proxy traffic to the service `web-new` in the `default` namespace, which is exactly the generated service.

When the application is deleted, all of the resources defined in the same app are deleted together.

You can also compose the database together with them, so that you can deliver all the components needed at one time.
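
For illustration, here is a rough sketch of what that could look like, assuming the `alibaba-rds` Terraform component type is enabled in your control plane; the component name `db`, the property values, and the secret name below are hypothetical:

```yaml
# Hypothetical extra component appended under `components:` of the composed application above.
# Assumes the alibaba-rds Terraform component type is available in your cluster.
    - name: db
      type: alibaba-rds
      properties:
        instance_name: composed-app-db   # made-up RDS instance name
        account_name: oamtest            # made-up database account
        password: ChangeMe123            # placeholder, replace with your own secret handling
        writeConnectionSecretToRef:
          name: db-conn                  # secret other components can consume for connection info
```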

### Clean Up

You can clean up all the applications in the demo by `vela delete`:

```
vela delete composed-app -y
vela delete frp-proxy -y
vela delete vela-app-with-sidecar -y
vela delete ecs-demo -y
```

Binary file added docs/resources/terraform-ecs.png
@@ -1,19 +1,19 @@
---
title: Cert-manager
title: Certificate Manager
---

This addon is for cert-manager, which is managing the kubernetes certificates.
The cert-manager addon provides Kubernetes certificate management capabilities.

Install the certificate manager on your Kubernetes cluster to enable adding the webhook component (only needed once per Kubernetes cluster).
Install cert-manager in your Kubernetes cluster to add the webhook component (only needed once per Kubernetes cluster).

## Install
## Install the Addon

```shell
vela addon enable cert-manager
```
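
To double-check that the addon's application is up in the control plane, one quick (optional) look, using the same `vela ls` output format shown elsewhere in these docs:

```shell
# Addon applications are installed into the vela-system namespace
vela ls -n vela-system | grep cert-manager
```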

## Uninstall
## Uninstall the Addon

```shell
vela addon disable cert-manager
```
@@ -1,22 +1,22 @@
# flink-kubernetes-operator

A Kubernetes operator for Apache Flink(https://github.com/apache/flink-kubernetes-operator), it allows users to manage Flink applications and their lifecycle through native k8s tooling like kubectl.
A Kubernetes operator for Apache Flink (https://github.com/apache/flink-kubernetes-operator) that allows users to manage Flink applications and their lifecycle through native k8s tooling like kubectl.

## Install
## Install the Addon

```shell
vela addon enable flink-kubernetes-operator
```

## Uninstall
## Uninstall the Addon

```shell
vela addon disable flink-kubernetes-operator
```

## Check the flink-kubernetes-operator running status
## Check the Running Status of flink-kubernetes-operator

Since this addon dependents `fluxcd` and `cert-manager` addon, so will enable them automatically. Check the status of them:
Since this addon depends on the `fluxcd` and `cert-manager` addons, they will be enabled automatically. Run the following command to check their status:
```shell
$ vela ls -n vela-system
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
@@ -30,8 +30,7 @@ addon-fluxcd flux-system-namespace raw

```


Show the component type `flink-cluster`, so we know how to use it in one application. As a flink user, you can choose the parameter to set for your flink cluster
Showing the properties of the `flink-cluster` component type lets us know how to use it in an application. As a Flink user, you can choose which of these parameters to set for your Flink cluster:
```shell
vela show flink-cluster
# Properties
@@ -54,12 +53,11 @@ vela show flink-cluster
+--------------+-------------+--------+----------+---------------------------------------------------------------+
```

## Example for how to run a component typed flink-cluster in application

First please make sure your cluster already exists namespace `flink-home`.
## Example: Run a flink-cluster Typed Component in an Application

Then deploy the application:
First, please make sure the namespace `flink-home` already exists in your cluster.
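
If it does not exist yet, creating it is a one-liner (a minimal example):

```shell
# Create the namespace the flink-cluster component will be deployed into
kubectl create namespace flink-home
```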

Then deploy the following application:
```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
@@ -76,7 +74,7 @@ components:
EOF
```

Check all the related resources:
Check the status of the related resources:

```shell
vela status flink-app-v1
@@ -108,4 +106,4 @@ Services:
No trait applied
```

You can see you first flink-cluster application is running!
You will see that your first flink-cluster application is running!
@@ -1,26 +1,26 @@
---
title: kubevela.io
title: kubevela-io
---

This addon is the document website, align with https://kubevela.io/ and its mirror https://kubevela.net/ .
This addon is a documentation website whose content is kept in sync with https://kubevela.io/ and its mirror https://kubevela.net/ .

Use this addon can help you to read the document in your cluster which can be air-gaped environment.
Using this addon helps you browse the documentation seamlessly from within your own cluster.

## install
## Install the Addon

```shell
vela addon enable kubevela-io
```

## uninstall
## Uninstall the Addon

```shell
vela addon disable kubevela-io
```

## more notes
- About the image to deploy
- The image in this addon is oam-dev/kubevela-io:latest in default. you can pull the image from the dockerhub or you can compile the source code and built it to an image, then push your own local hub.
- About the way to access the local kubevela-io website
- You can use the NodePort service which is deployed in vela-system named kubevela-io-np
- You may use the ingress as you wish
## More Notes
- About the image to deploy
  - The image used in this addon defaults to oam-dev/kubevela-io:latest. You can pull the image from Docker Hub, or compile the source code, build it into an image, and push it to your own local registry.
- About how to access the local kubevela-io documentation site
  - You can use the NodePort service named kubevela-io-np that is already deployed in the vela-system namespace.
  - You can also use an ingress if you prefer.
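
As a quick way to locate that NodePort access point (a minimal sketch using standard kubectl; the service name comes from the note above):

```shell
# Look up the NodePort assigned to the kubevela-io-np service
kubectl -n vela-system get svc kubevela-io-np
# Then open http://<any-node-ip>:<assigned-node-port> in a browser
```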
@@ -2,31 +2,23 @@
title: OCM Cluster-Gateway Manager
---

__TL;DR__: "OCM Cluster-Gateway Manager" addon installs an operator component
into the hub cluster that help the administrator to easily operate the
configuration of cluster-gateway instances via "ClusterGatewayConfiguration"
custom resource. *WARNING* this addon will restart the cluster-gateway
instances upon the first-time installation.
The "OCM Cluster-Gateway Manager" addon installs an operator component into the hub cluster that helps the administrator easily operate the configuration of cluster-gateway instances via the ClusterGatewayConfiguration custom resource. *WARNING*: this addon will restart the cluster-gateway instances upon first-time installation.

## What does "Cluster-Gateway Manager" do?
## What Can Cluster-Gateway Manager Do?

Basically it helps us to sustainably operate the cluster-gateway instances from
the following aspects:
It helps us in the following aspects:

* Automatic cluster-gateway's server TLS certificate rotation.
* Automatic cluster discovery.
* Structurize the component configuration for cluster-gateway.
* Manages the "egress identity" for cluster-gateway to access each clusters.
* Automatically rotates the cluster-gateway server's TLS certificate.
* Automatically discovers new clusters.
* Structures the component configuration for cluster-gateway.
* Manages the "egress identity" that cluster-gateway uses to access each cluster.

Note that the requests proxied by cluster-gateway will use the identity of
`open-cluster-management-managed-serviceaccount/cluster-gateway` to access
the managed clusters, and by default w/ cluster-admin permission, so please
be mindful of that.
Note that requests proxied by cluster-gateway will use the identity of `open-cluster-management-managed-serviceaccount/cluster-gateway` to access the managed clusters, and by default with cluster-admin permission, so please be mindful of that.


### How to confirm if the addon installation is working?
### How to Confirm the Addon Is Working Properly?

Run the following commands to check the healthiness of the addons:
Run the following commands to check the health of the addons:

```shell
$ kubectl -n <cluster> get managedclusteraddon
@@ -37,14 +29,13 @@ NAMESPACE NAME AVAILABLE DEGRADED PROGRESSING
<cluster> managed-serviceaccount True
```

In case you have too many clusters to browse at a time, install the command-line
binary via:
If you have too many clusters to browse at a time, install the `clusteradm` command-line binary via:

```shell
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
```

Then run the following commands to see the details of the addon:
Then run the following command to get the addon status across all clusters:

```shell
$ clusteradm get addon
@@ -72,10 +63,9 @@ $ clusteradm get addon
└── ...
```

### Sample of ClusterGatewayConfiguration API
### Sample ClusterGatewayConfiguration API Configuration

You can read or edit the overall configuration of cluster-gateway deployments
via the following command:
You can view or edit the overall configuration of the cluster-gateway deployment via the following command:

```shell
$ kubectl get clustergatewayconfiguration -o yaml